2 Some networking facts
This section enumerates a set of facts that typically apply to current data
transmission scenarios based on computer networks.
- Routers deliver datagrams, not streams, between connected devices: In
other words, the cost (in terms of bandwidth) of sending two or more packets
to two or more different hosts is equal to the cost of sending the same number
of packets to a single host.
- Redundancy enables data compression: Some data-link protocols, such as
PPP (Point-to-Point Protocol), can compress the payloads in order to
decrease the size of the packets. Therefore, assuming a constant packet size,
it is to be expected that the cost of sending two or more packets with identical
content will be less than or equal to the cost of sending the same number of
packets of the same size but with different content.
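This effect can be illustrated with a short sketch using Python's `zlib` (for illustration only; PPP implementations negotiate their own compression schemes, and the 1 kB chunk size is an assumption):

```python
import os
import zlib

CHUNK_SIZE = 1024  # hypothetical constant packet payload size

# A payload with high redundancy (repeated content) ...
redundant = (b"the same chunk " * (CHUNK_SIZE // 15 + 1))[:CHUNK_SIZE]

# ... versus a payload of the same size with (pseudo-)random content.
random_payload = os.urandom(CHUNK_SIZE)

compressed_redundant = zlib.compress(redundant)
compressed_random = zlib.compress(random_payload)

# The redundant payload compresses to far fewer bytes, so it costs less
# bandwidth on a link that compresses payloads.
print(len(compressed_redundant), len(compressed_random))
```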
- IP multicast availability: Although IP multicast is disabled at the global
scale, it is usually available locally, and in that case it is the most efficient
way of broadcasting data. For this reason, network-level multicast should be
used whenever possible.
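A minimal sketch of a local multicast sender using Python's standard socket API follows; the group address, port, and payload are illustrative:

```python
import socket

GROUP = "224.0.0.251"   # illustrative multicast group address
PORT = 5007             # hypothetical port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# A TTL of 1 keeps the datagrams inside the local network segment.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

# A single send reaches every host that has joined the group: the
# network, not the sender, replicates the datagram.
try:
    sock.sendto(b"chunk payload", (GROUP, PORT))
except OSError:
    pass  # no route to the group (e.g. no usable network interface)
finally:
    sock.close()
```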
- Encapsulation overhead: Each block of media content (which will be
referred to as a “chunk” in the rest of this document) sent between peers must
undergo a process of encapsulation, which basically consists of adding a header,
and sometimes a trailer, for each network layer the packet traverses on its trip.
The headers of the physical, data-link and network layers are compulsory in the
Internet. However, at the transport layer there are basically two options:
(1) the TCP (Transmission Control Protocol), a protocol that is reliable from
the point of view of transmission errors and that avoids network congestion,
and (2) the UDP (User Datagram Protocol), which basically provides a datagram
transmission service. Apart from these differences, it should be noted that the
header overhead of the two protocols is different: 20 bytes in the case of TCP
and 8 bytes in the case of UDP.
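The impact of this overhead difference can be quantified with simple arithmetic. The sketch below (transport header sizes from the text; the chunk size and the minimum 20-byte IPv4 header are assumptions) compares the fraction of each packet that carries actual chunk data over TCP versus UDP:

```python
CHUNK_SIZE = 1024   # hypothetical chunk size in bytes
IP_HEADER = 20      # minimum IPv4 header, without options
TCP_HEADER = 20     # minimum TCP header, without options
UDP_HEADER = 8      # fixed UDP header

def efficiency(transport_header: int) -> float:
    """Fraction of each packet that carries actual chunk data."""
    return CHUNK_SIZE / (CHUNK_SIZE + IP_HEADER + transport_header)

print(f"TCP: {efficiency(TCP_HEADER):.4f}")  # 1024 / 1064
print(f"UDP: {efficiency(UDP_HEADER):.4f}")  # 1024 / 1052
```

As expected, the smaller UDP header yields a slightly higher protocol efficiency per packet, a difference that accumulates over the large number of chunks a media stream requires.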
- Congestion control and latency: The Internet is a shared medium and, as
such, the available bandwidth is unknown a priori because it depends mainly on
the bandwidth that other network users consume at that time. Furthermore,
when the demand for bandwidth is higher than the network can provide,
a phenomenon known as network congestion occurs. This effect is a
consequence of the routers, the devices that decide the paths to be followed by
data packets, receiving more data than they can process; when this happens,
packets are simply discarded. This behavior has a serious negative
impact on overall network performance because, usually, a discarded packet
will sooner or later be retransmitted, thus contributing to further
congestion of the network.
To avoid entering this dangerous dynamic, TCP provides a mechanism
for congestion avoidance which basically reduces the transmission rate when
there are indications that the network is congested. As a result of this
reduction of the transmission rate, users experience an increased latency in
communication. In the case of UDP, such a mechanism does not exist and it is
the responsibility of the application to prevent network congestion.
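One common way for an application to pace its own UDP transmissions is a token bucket rate limiter. The following is a minimal sketch under assumed parameters (the rate and burst capacity are illustrative, and timestamps are passed in explicitly to keep the example deterministic):

```python
class TokenBucket:
    """Allows sending at most `rate` bytes per second on average,
    with bursts of up to `capacity` bytes."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens (bytes) added per second
        self.capacity = capacity  # maximum burst size in bytes
        self.tokens = capacity    # start with a full bucket
        self.last = 0.0           # timestamp of the last update

    def allow(self, packet_size: int, now: float) -> bool:
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False  # the application should delay (or drop) this packet


bucket = TokenBucket(rate=1_000_000, capacity=10_000)  # ~1 MB/s, 10 kB bursts
print(bucket.allow(1500, now=0.0))  # True: burst allowance is available
```

Before each `sendto` call, the application would consult `allow` with the current time (e.g. `time.monotonic()`) and wait when it returns `False`, thereby bounding its contribution to network congestion.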