Bandwidth, Throughput and Delay in Computer Networks
Introduction
Bandwidth
Bandwidth, in computer networks and in digital systems in general, is the maximum amount of data per unit time that can be transmitted between two given nodes. Digital bandwidth is synonymous with bit rate and data rate.
The actual bandwidth of a network is determined by a combination of the physical media and the technologies chosen for signaling and detecting network signals. Current understanding of the physics of unshielded twisted-pair (UTP) copper cable puts its theoretical bandwidth limit at over 10 Gbps. In practice, however, the actual bandwidth is determined by the signaling methods, NICs, and other active network equipment in use. The bandwidth is therefore not determined solely by the limitations of the medium.
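As a quick illustration of bandwidth as bits per unit time, here is a minimal sketch (the function name and the 100 MB / 100 Mbps figures are illustrative, not from the text) of the ideal transfer time for a file over a link:

```python
def transfer_time(file_size_bytes: int, bandwidth_bps: float) -> float:
    """Ideal transfer time in seconds: bits to send divided by the bit rate.

    Ignores protocol overhead and all delays, so this is a lower bound.
    """
    return (file_size_bytes * 8) / bandwidth_bps

# A 100 MB file over a 100 Mbps link:
# (100_000_000 bytes * 8 bits/byte) / 100e6 bits/s = 8 seconds
print(transfer_time(100_000_000, 100e6))  # -> 8.0
```

Real transfers take longer, for the reasons discussed under Throughput and Delay below.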
Throughput
Throughput is the amount of useful data that can be transmitted per unit time. It would equal the bandwidth if no communication protocol were involved, but in most cases the throughput is less than the bandwidth for two reasons:
- Protocol overhead: protocols consume some bytes of each transmission for headers and other control information, which reduces the throughput.
- Protocol waiting times: some protocols force network equipment to wait for an event (for example, an acknowledgment) before sending more data.
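The effect of protocol overhead can be sketched as the fraction of each packet that carries useful data. The 1460-byte payload and 78-byte per-packet overhead below are assumed typical values for TCP over Ethernet (preamble, Ethernet header and FCS, interframe gap, IP and TCP headers), not figures from the text:

```python
def effective_throughput(bandwidth_bps: float,
                         payload_bytes: int,
                         overhead_bytes: int) -> float:
    """Useful-data rate: bandwidth scaled by the payload's share of each packet."""
    return bandwidth_bps * payload_bytes / (payload_bytes + overhead_bytes)

# Assumed: 1460 payload bytes and ~78 overhead bytes per packet for
# TCP over Ethernet, on a 100 Mbps link.
rate = effective_throughput(100e6, 1460, 78)
print(f"{rate / 1e6:.1f} Mbps of useful data")  # ~94.9 Mbps, not 100
```

This accounts only for the overhead bytes; waiting times (the second reason above) reduce throughput further.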
Delay (Round Trip Time – RTT)
The delay, also called Round Trip Time (RTT), experienced by a packet within the network is composed of the following parts:
- Processing delay – the time routers take to process the packet header.
- Queuing delay – the time the packet sits in routing queues.
- Transmission delay – the time it takes to push the packet's bits onto the link.
- Propagation delay – the time taken by the front of the signal to reach the destination. An electromagnetic signal propagates at the speed of light in a vacuum; in a physical medium it travels more slowly, at a speed that depends on the medium (and, in dispersive media, on the wavelength). For example, a signal propagates through a twisted pair at roughly 0.66c, where c is the speed of light in a vacuum.