4.5 DBS (Data Broadcasting Set of rules)

This set of rules has been designed to transmit a data stream efficiently from a splitter node to the peers of the team when unicast transmissions are used between the nodes.

  1. Chunk scheduling: Chunks are transmitted from the splitter to the peers, and then among the peers (see Figure ??). The splitter sends the n-th chunk to the peer Pi if
    (i + n) mod |T| = 0, (2)

    where |T| is the number of peers in the team. Next, Pi must forward this chunk to the rest of the peers of the team. Chunks received from other peers are not retransmitted (a sketch of this scheduling is given below).
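
    The following minimal Python sketch (not taken from any particular implementation) shows how a splitter could apply Equation (2) to select the destination of each chunk; the wire format (a 16-bit chunk number followed by the payload) and all names are assumptions.

      import struct

      def destination_of_chunk(chunk_number, team):
          # Equation (2): chunk n goes to the peer P_i whose index satisfies
          # (i + n) mod |T| = 0, that is, i = (-n) mod |T|.
          return team[(-chunk_number) % len(team)]

      def send_chunk(team_socket, chunk_number, payload, team):
          # Prepend the chunk number so that peers can place the chunk in
          # their buffers, then send it to the scheduled peer over UDP.
          message = struct.pack("!H", chunk_number % 65536) + payload
          team_socket.sendto(message, destination_of_chunk(chunk_number, team))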

  2. Congestion avoidance in peers: Each peer sends chunks using a constant bit-rate strategy in order to minimize the congestion of its uploading link. Notice that the rate at which chunks arrive at a peer is a good metric to perform this control in networks with a reasonably low packet loss ratio.
  3. Burst mode in peers: The congestion avoidance mode is abandoned immediately if a new chunk is received from the splitter before the previous chunk has been retransmitted to the rest of the peers of the team. In burst mode, the peer sends the previously received chunk (from the splitter) to the rest of the peers of the team as fast as possible. Notice that, although this behaviour is potentially a source of congestion, only a small number of chunks should be sent in burst mode under a reasonably low packet loss ratio. Both modes are sketched below.
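
    A minimal sketch of how a peer could combine both modes follows; it assumes a pending queue of copies still owed to the rest of the team and uses the arrival of any chunk as the clock of the congestion avoidance mode. The function and variable names are hypothetical.

      def on_chunk_received(message, sender, team_socket, splitter, neighbours, pending):
          # 'pending' holds (neighbour, chunk) copies not yet forwarded.
          if sender == splitter:
              if pending:
                  # Burst mode: a new chunk arrived from the splitter before
                  # the previous one was fully relayed; flush the queue as
                  # fast as possible.
                  while pending:
                      neighbour, chunk = pending.pop(0)
                      team_socket.sendto(chunk, neighbour)
              # Schedule one copy of the new chunk for every other peer.
              for neighbour in neighbours:
                  pending.append((neighbour, message))
          # Congestion avoidance: forward (at most) one pending copy per
          # received chunk, so the upload follows the chunk rate of the stream.
          if pending:
              neighbour, chunk = pending.pop(0)
              team_socket.sendto(chunk, neighbour)
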
  4. The list of peers: Every node of the team (the splitter and the peers) knows the endpoint P = (P.IP address, P.port) of the rest of the peers of the team. With this information a list is built, which is used by the splitter to send the chunks to the peers and by the peers to forward the received chunks to the other peers.
  5. Peer arrivals: An incoming peer X must contact the splitter in order to join the team. After that, the splitter sends to X the list of peers and the current stream header over TCP. More exactly, the splitter performs:
    1. Send (over TCP) to X the number of peers in the list of peers.
    2. For each peer Pi in the list of peers:
      1. Send (TCP) to X the endpoint of Pi.
    3. Append X to the list of peers.

    The incoming peer X performs:

    1. Receive (TCP) from the splitter the number of peers in the list of peers.
    2. For each peer Pi in the list of peers:
      1. Receive (TCP) the endpoint of Pi from the splitter.
      2. Send (UDP) to Pi a [hello] message.

    Because the [hello] messages can be lost, some peers of the team might not get to know X through this presentation. However, because peers also learn about their neighbors when an [IMS] message is received from them, the impact of these losses should be small. Both sides of the join handshake are sketched below.
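
    The following Python sketch illustrates the handshake under several assumptions: the wire format (a 16-bit counter followed by 6-byte endpoints), the literal [hello] payload and the function names are hypothetical, and the transmission of the stream header is omitted.

      import socket
      import struct

      def handle_arrival(connection, peer_list):
          # Splitter side: 'connection' is the TCP socket accepted from the
          # incoming peer X, 'peer_list' the current list of endpoints.
          connection.sendall(struct.pack("!H", len(peer_list)))
          for ip_address, port in peer_list:
              connection.sendall(socket.inet_aton(ip_address) + struct.pack("!H", port))
          # X is assumed to use the same port for its TCP and UDP sockets.
          peer_list.append(connection.getpeername())

      def join_team(splitter, team_socket):
          # Incoming peer side: learn the team and greet every peer with a
          # [hello] message (UDP, so it can be lost).
          connection = socket.create_connection(splitter)
          number_of_peers = struct.unpack("!H", connection.recv(2))[0]
          peer_list = []
          for _ in range(number_of_peers):
              data = connection.recv(6)  # short reads ignored for brevity
              endpoint = (socket.inet_ntoa(data[:4]), struct.unpack("!H", data[4:6])[0])
              peer_list.append(endpoint)
              team_socket.sendto(b"hello", endpoint)
          return peer_list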

  6. Free-riding control in peers: The main idea behind the DBS is that, over a large enough interval of time, any peer must relay the same amount of data that it receives. If an (infra-solidary) peer cannot fulfill this rule, it must leave the team and join another team that requires less bandwidth. In order to achieve this, each peer Pi assigns a counter to each other peer Pj of the team. When a chunk is sent to Pj, its counter is incremented, and when a chunk is received from Pj, its counter is decremented. If the counter of Pj reaches a given threshold, Pj is deleted from the list of peers of Pi and it will not be served by Pi any more.

    Notice that this rule also removes from the peers' lists those peers that perform an impolite churn (those peers that leave the team without sending the [goodbye] message). A sketch of this counter-based control follows.
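
    A minimal Python sketch of these counters; the threshold value, class and method names are assumptions.

      MAX_DEBT = 32  # assumed threshold, not specified by the rules

      class FreeRidingControl:
          def __init__(self):
              self.debt = {}  # neighbour endpoint -> chunks sent minus received

          def chunk_sent_to(self, neighbour, peer_list):
              self.debt[neighbour] = self.debt.get(neighbour, 0) + 1
              if self.debt[neighbour] > MAX_DEBT:
                  # Unsupportive neighbour (or impolite churn): stop serving it.
                  peer_list.remove(neighbour)
                  del self.debt[neighbour]

          def chunk_received_from(self, neighbour):
              self.debt[neighbour] = self.debt.get(neighbour, 0) - 1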




    Figure 3: A typical P2PSP configuration using a monitor-peer (P0). Notice that the monitor-peer, the source and the splitter run on the same host.

  7. Monitor peers: Some peers (see P0 in Figure 3), which usually run close (in number of hops) to the splitter, play different roles depending on the P2PSP modules implemented. Among others:
    1. As a consequence of the impolite churn and of peer insolidarity, it is unrealistic to expect that a single video source can feed a large number of peers and, at the same time, that the users will experience a high QoS. For this reason, the team administrator should monitor the streaming session: if the media is correctly played by the monitor peer, then, with a high probability, the peers of the team are correctly playing the media too.
    2. At least one monitor peer is created before any other peer in the team and for this reason the transmission rate of the first monitor peer is 0. However, the transmission rate of the second (first standard) peer, and the monitor peer, is:
      B / 2,

      where B is the average encoding rate of the stream. When the size of the team is |T|, the transmission rate of all the peers of the team (including the monitor peers, obviously) is

      B |T| / (|T| + 1). (3)

      Therefore, only the first (monitor) peer is included in the team without an initial transmission requirement. Notice also that

      lim_{|T| → ∞} B |T| / (|T| + 1) = B, (4)

      which means that, when the team is large enough, all the peers of the team will transmit the same amount of data that they receive (see the numerical illustration after this rule).

    3. In order to minimize the number of loss reports (see Rule 10 and Section 4.7) in the team, the monitor peers are the only entities allowed to complain to the splitter about lost chunks.
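
    As a purely numerical illustration of Equations (3) and (4) (the encoding rate of 1000 kbps is only an example):

      def transmission_rate(B, team_size):
          # Equation (3): each peer uploads B |T| / (|T| + 1).
          return B * team_size / (team_size + 1)

      # With an assumed average encoding rate B = 1000 kbps:
      #   transmission_rate(1000, 1)   ->  500.0 kbps
      #   transmission_rate(1000, 10)  ->  909.1 kbps (approx.)
      #   transmission_rate(1000, 100) ->  990.1 kbps (approx.), approaching B
      #   as predicted by Equation (4).
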
  8. Peer departures: Peers are required to send a [goodbye] message to the splitter and to the rest of the peers of the team when they leave, so that the splitter can stop sending chunks to them as soon as possible (see the sketch after this list). However, if a peer Pi leaves without notification, no more chunks will be received from it. This should trigger the following succession of events:
    1. In the rest of the peers {Pj, j ≠ i}, the free-riding control mechanism (see Rule 6) will remove Pi from their lists of peers.
    2. All monitor peers will complain to the splitter about the chunks that the splitter sent to Pi (and that Pi never relayed).
    3. After receiving a sufficient number of complaints, the splitter will delete Pi from its list.
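
    A sketch of the polite departure; the literal payload of the [goodbye] message is an assumption.

      def leave_team(team_socket, splitter, peer_list):
          # Tell the splitter and every neighbour that this peer is leaving,
          # so that they stop sending chunks to it as soon as possible.
          for endpoint in [splitter] + peer_list:
              team_socket.sendto(b"goodbye", endpoint)
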
  9. Relation between the buffer size B and the team size |T|: As in the IMS module, peers need to buffer some chunks before the playback. However, the main reason for buffering in the DBS is not the network jitter but the overlay jitter. As defined in Rule 1, peers retransmit the [IMS] messages received from the splitter to the rest of the team, and, as specified in Rule 2, peers send these messages at the chunk rate of the stream. Therefore, depending on the position of a peer X in the list of peers of a peer Y, it can take more or fewer chunk times for Y to send the [IMS] message to X.

    In order to handle this unpredictable retransmission delay, the peers' buffers should store at least |T| chunks. This means that the team size is limited by the buffer size, i.e., in the DBS module it must hold that

    |T| ≤ B. (5)
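
    A minimal sketch of a peer's chunk buffer that respects Equation (5); storing chunks at position (chunk number mod buffer size) is an assumption, not something the rules prescribe.

      class ChunkBuffer:
          def __init__(self, buffer_size, team_size):
              # Equation (5): the overlay jitter can reach |T| chunk times,
              # so the buffer must hold at least |T| chunks.
              assert team_size <= buffer_size, "|T| must not exceed B"
              self.cells = [None] * buffer_size

          def insert(self, chunk_number, chunk):
              self.cells[chunk_number % len(self.cells)] = chunk

          def play(self, chunk_number):
              cell = chunk_number % len(self.cells)
              chunk, self.cells[cell] = self.cells[cell], None
              # A None result means the chunk is missing at playback time,
              # i.e. it is classified as lost (see Rule 10).
              return chunk
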
  10. Chunk tracking at the splitter: In order to identify unsupportive (free-riding) peers, the splitter remembers, for the last B transmitted chunks, the peer each chunk was sent to. Only the monitor peers complain to the splitter about a lost chunk x, using [lost chunk number x] report messages. In the DBS module, a chunk is classified as lost when it is time to deliver it to the player and the chunk is missing.
  11. Free-riding control in the splitter: In this module it is compulsory that peers contribute to the team the same amount of data that they receive from the team (always under the conditions imposed by Equation (4)). In order to guarantee this, the splitter counts, for every peer of the team, the number of complaints (sent by the monitor peer(s)) that the peer produces. If this number exceeds a given threshold, the unsupportive peer will be expelled from the team: first removed from the list of the splitter and next from the lists of all the peers of the team (see Rule 6). Both splitter-side mechanisms are sketched below.
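
    The sketch below combines Rules 10 and 11 on the splitter side; the complaint threshold, the data structures and the method names are assumptions.

      COMPLAINT_THRESHOLD = 8  # assumed value

      class FreeRidingControlAtSplitter:
          def __init__(self, buffer_size):
              self.destination_of = [None] * buffer_size  # last B destinations
              self.complaints = {}                        # peer -> complaint count

          def chunk_sent(self, chunk_number, peer):
              # Rule 10: remember which peer received each of the last B chunks.
              self.destination_of[chunk_number % len(self.destination_of)] = peer

          def lost_chunk_reported(self, chunk_number, peer_list):
              # Rule 11: only monitor peers send [lost chunk number x] reports.
              peer = self.destination_of[chunk_number % len(self.destination_of)]
              if peer is None:
                  return
              self.complaints[peer] = self.complaints.get(peer, 0) + 1
              if self.complaints[peer] > COMPLAINT_THRESHOLD and peer in peer_list:
                  # Unsupportive peer: remove it from the splitter's list; the
                  # peers will also drop it through Rule 6.
                  peer_list.remove(peer)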