Network Performance - 584
Bandwidth and throughput are subtly different terms. First of all, bandwidth is
literally a measure of the width of a frequency band. For example, a legacy voice-
grade telephone line supported a frequency band ranging from 300 to 3300 Hz; it
was said to have a bandwidth of 3300 Hz - 300 Hz = 3000 Hz. If you see the
word bandwidth used in a situation in which it is being measured in hertz, then it
probably refers to the range of signals that can be accommodated.
While you can talk about the bandwidth of the network as a whole, sometimes
you want to be more precise, focusing, for example, on the bandwidth of a single
physical link or of a logical process-to-process channel. At the physical level,
bandwidth is constantly improving, with no end in sight. Intuitively, if you think
of a second of time as a distance you could measure with a ruler and bandwidth
as how many bits fit in that distance, then you can think of each bit as a pulse of
some width. For example, each bit on a 1-Mbps link is 1 μs wide, while each bit on
a 2-Mbps link is 0.5 μs wide, as illustrated in Figure 16. The more sophisticated
the transmitting and receiving technology, the narrower each bit can become and,
thus, the higher the bandwidth. For logical process-to-process channels,
bandwidth is also influenced by other factors, including how many times the
software that implements the channel has to handle, and possibly transform,
each bit of data.
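The bit-as-pulse picture above can be made concrete with a minimal Python sketch (the function name is an illustrative choice, not from the text): the width of one bit is simply the reciprocal of the link bandwidth.

```python
def bit_width_seconds(bandwidth_bps: float) -> float:
    """Width of one bit 'pulse' on a link: the time the link
    spends transmitting a single bit, in seconds."""
    return 1.0 / bandwidth_bps

# Each bit on a 1-Mbps link is 1 microsecond wide;
# each bit on a 2-Mbps link is 0.5 microseconds wide.
print(bit_width_seconds(1e6))  # 1e-06
print(bit_width_seconds(2e6))  # 5e-07
```

Doubling the bandwidth halves the pulse width, which is why narrower pulses directly translate into higher bit rates.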
We often think of latency as having three components. First, there is the speed-
of-light propagation delay. This delay occurs because nothing, including a bit on a
wire, can travel faster than the speed of light. If you know the distance between
two points, you can calculate the speed-of-light latency, although you have to be
careful because light travels across different media at different speeds: It travels
at 3.0 × 10^8 m/s in a vacuum, 2.3 × 10^8 m/s in a copper cable, and 2.0 × 10^8 m/s
in an optical fiber. Second, there is the amount of time it takes to transmit a unit
of data. This is a function of the network bandwidth and the size of the packet in
which the data is carried. Third, there may be queuing delays inside the network,
since packet switches generally need to store packets for some time before
forwarding them on an outbound link. So, we could define the total latency as

Latency = Propagation + Transmit + Queue
Propagation = Distance/SpeedOfLight
Transmit = Size/Bandwidth
where Distance is the length of the wire over which the data will
travel, SpeedOfLight is the effective speed of light over that wire, Size is the size
of the packet, and Bandwidth is the bandwidth at which the packet is
transmitted. Note that if the message contains only one bit and we are talking
about a single link (as opposed to a whole network), then
the Transmit and Queue terms are not relevant, and latency corresponds to the
propagation delay only.
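The three components can be combined in a short Python sketch (the function and dictionary names, and the example link in the usage line, are illustrative assumptions, not from the text):

```python
# Effective speed of light in different media (m/s), from the figures above.
SPEED_OF_LIGHT = {
    "vacuum": 3.0e8,
    "copper": 2.3e8,
    "fiber": 2.0e8,
}

def latency(distance_m: float, medium: str,
            size_bits: float, bandwidth_bps: float,
            queue_s: float = 0.0) -> float:
    """Total latency = Propagation + Transmit + Queue."""
    propagation = distance_m / SPEED_OF_LIGHT[medium]
    transmit = size_bits / bandwidth_bps
    return propagation + transmit + queue_s

# A single bit over a hypothetical 100-km fiber link at 1 Gbps:
# propagation dominates, Transmit is negligible, Queue is zero.
print(latency(100e3, "fiber", 1, 1e9))
```

For a one-bit message over a single link, the Transmit term (1/bandwidth) all but vanishes, matching the observation above that latency reduces to the propagation delay.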
In contrast, consider a digital library program that is being asked to fetch a 25-
megabyte (MB) image—the more bandwidth that is available, the faster it will be
able to return the image to the user. Here, the bandwidth of the channel
dominates performance. To see this, suppose that the channel has a bandwidth of
10 Mbps. It will take 20 seconds to transmit the image (25 × 10^6 bytes × 8
bits/byte ÷ 10 × 10^6 bits/second = 20 seconds), making it relatively
unimportant if the image is on the other side of a 1-ms channel or a 100-ms
channel; the difference between a 20.001-second response time and a
20.1-second response time is negligible.
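The arithmetic in this example can be checked with a short sketch (the variable names are mine):

```python
image_bits = 25e6 * 8     # 25-MB image, expressed in bits
bandwidth_bps = 10e6      # 10-Mbps channel

transmit_s = image_bits / bandwidth_bps   # 20.0 seconds

# Adding the channel latency barely changes the response time:
for channel_latency_s in (0.001, 0.1):
    print(transmit_s + channel_latency_s)  # 20.001, then 20.1
```

Because the 20-second transmit time swamps either latency value, this is a bandwidth-dominated workload, the mirror image of the one-bit case where propagation delay dominates.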