Network Performance


Assignment 1

Topic of Assignment/Title: Network Performance

Submitted to:

Name of Teacher: Muhammad Naeem Abbas

Submitted by:

Name of Student: Muhammad Awais Ejaz    Roll No.: 584

Course Code: CSI-512    Title: Computer Networks

Program: BS Computer Science    Semester: 6th (Evening)

Date of Submission: 08/15/2020


Network Performance
Network performance is primarily measured from an end-user perspective
(i.e., the quality of network services delivered to the user). Broadly, network
performance is measured by reviewing the statistics and metrics from the
following network components:

 Network bandwidth or capacity - the maximum rate at which data can be
transferred
 Network throughput - the amount of data successfully transferred over
the network in a given time
 Network delay, latency and jitter - any network issue causing
packet transfer to be slower than usual
 Data loss and network errors - packets dropped or lost in
transmission and delivery

What Can Hinder Optimal Network Performance?
Just as effective information management is no cakewalk, improving
network performance isn't any easier. Networks are complex systems and
often don’t take kindly to change. That’s because most networks are a
Pandora’s box of different tools and shared resources, all operating in
tandem. And much like the single thread that has the potential to unravel an
entire sweater, ‘pulling’ any single part of your network can have a negative
effect on the whole.
Some common challenges include:

 Hardware and equipment updates can be costly and time-consuming:
Your network isn't some single-system device that you can simply replace
when the next version is released. Attempts to upgrade network
performance often end in a broken network. To offset this risk, businesses
invest in costly and time-consuming IT resources.
 New equipment may not function with existing infrastructure: We
want faster, more efficient options, and vendors are more than happy to
provide them. However, the newest and best isn't always backwards
compatible, and when it isn't, the ecosystem of tools running on the
infrastructure can easily fall apart.

How is network performance measured?

Bandwidth and Latency


Network performance is measured in two fundamental ways: bandwidth (also
called throughput) and latency (also called delay). The bandwidth of a network
is given by the number of bits that can be transmitted over the network in a
certain period of time. For example, a network might have a bandwidth of 10
million bits/second (Mbps), meaning that it is able to deliver 10 million bits every
second. It is sometimes useful to think of bandwidth in terms of how long it takes
to transmit each bit of data. On a 10-Mbps network, for example, it takes 0.1
microsecond (μs) to transmit each bit.
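
As a quick check on that figure, here is a minimal Python sketch (the bandwidth values are illustrative) computing the per-bit transmit time as the reciprocal of the bandwidth:

    # Time to transmit one bit is the reciprocal of the link bandwidth.
    def bit_time_us(bandwidth_bps):
        """Time to transmit a single bit, in microseconds."""
        return 1e6 / bandwidth_bps

    print(bit_time_us(10e6))  # 10 Mbps -> 0.1 us per bit
    print(bit_time_us(1e6))   # 1 Mbps  -> 1.0 us per bit
    print(bit_time_us(2e6))   # 2 Mbps  -> 0.5 us per bit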

Bandwidth and throughput are subtly different terms. First of all, bandwidth is
literally a measure of the width of a frequency band. For example, a legacy
voice-grade telephone line supported a frequency band ranging from 300 to 3300
Hz; such a line was said to have a bandwidth of 3300 Hz − 300 Hz = 3000 Hz. If you see the
word bandwidth used in a situation in which it is being measured in hertz, then it
probably refers to the range of signals that can be accommodated.

When we talk about the bandwidth of a communication link, we normally refer to
the number of bits per second that can be transmitted on the link. This is also
sometimes called the data rate. We might say that the bandwidth of an Ethernet
link is 10 Mbps. A useful distinction can also be made, however, between the
maximum data rate that is available on the link and the number of bits per
second that we can actually transmit over the link in practice. We tend to use the
word throughput to refer to the measured performance of a system. Thus,
because of various inefficiencies of implementation, a pair of nodes connected by
a link with a bandwidth of 10 Mbps might achieve a throughput of only 2 Mbps.
This would mean that an application on one host could send data to the other
host at 2 Mbps.
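
To make the bandwidth/throughput distinction concrete, here is a minimal sketch of how throughput is typically measured: time an actual transfer and divide. The send function here is a hypothetical placeholder for whatever blocking transfer the system provides:

    import time

    def measured_throughput_bps(send, payload):
        """Measure achieved throughput: bits actually moved per elapsed second.
        `send` is a hypothetical blocking transfer function (e.g., wrapping a
        socket sendall plus waiting for the receiver's acknowledgment)."""
        start = time.monotonic()
        send(payload)
        elapsed = time.monotonic() - start
        return len(payload) * 8 / elapsed

On the 10-Mbps link described above, such a measurement might report only 2 Mbps, reflecting protocol and implementation overhead rather than the link's rated capacity.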

Finally, we often talk about the bandwidth requirements of an application. This is
the number of bits per second that it needs to transmit over the network to
perform acceptably. For some applications, this might be “whatever I can get”;
for others, it might be some fixed number (preferably not more than the available
link bandwidth); and for others, it might be a number that varies with time. We
will provide more on this topic later in this section.

While you can talk about the bandwidth of the network as a whole, sometimes
you want to be more precise, focusing, for example, on the bandwidth of a single
physical link or of a logical process-to-process channel. At the physical level,
bandwidth is constantly improving, with no end in sight. Intuitively, if you think
of a second of time as a distance you could measure with a ruler and bandwidth
as how many bits fit in that distance, then you can think of each bit as a pulse of
some width. For example, each bit on a 1-Mbps link is 1 μs wide, while each bit on
a 2-Mbps link is 0.5 μs wide, as illustrated in Figure 16. The more sophisticated
the transmitting and receiving technology, the narrower each bit can become and,
thus, the higher the bandwidth. For logical process-to-process channels,
bandwidth is also influenced by other factors, including how many times the
software that implements the channel has to handle, and possibly transform,
each bit of data.

The second performance metric, latency, corresponds to how long it takes a
message to travel from one end of a network to the other. (As with bandwidth, we
could be focused on the latency of a single link or an end-to-end channel.)
Latency is measured strictly in terms of time. For example, a transcontinental
network might have a latency of 24 milliseconds (ms); that is, it takes a message
24 ms to travel from one coast of North America to the other. There are many
situations in which it is more important to know how long it takes to send a
message from one end of a network to the other and back, rather than the one-
way latency. We call this the round-trip time (RTT) of the network.

We often think of latency as having three components. First, there is the speed-
of-light propagation delay. This delay occurs because nothing, including a bit on a
wire, can travel faster than the speed of light. If you know the distance between
two points, you can calculate the speed-of-light latency, although you have to be
careful because light travels across different media at different speeds: It travels
at 3.0 × 10⁸ m/s in a vacuum, 2.3 × 10⁸ m/s in a copper cable, and 2.0 × 10⁸ m/s
in an optical fiber. Second, there is the amount of time it takes to transmit a unit
of data. This is a function of the network bandwidth and the size of the packet in
which the data is carried. Third, there may be queuing delays inside the network,
since packet switches generally need to store packets for some time before
forwarding them on an outbound link. So, we could define the total latency as

Latency = Propagation + Transmit + Queue

Propagation = Distance/SpeedOfLight

Transmit = Size/Bandwidth

where Distance is the length of the wire over which the data will
travel, SpeedOfLight is the effective speed of light over that wire, Size is the size
of the packet, and Bandwidth is the bandwidth at which the packet is
transmitted. Note that if the message contains only one bit and we are talking
about a single link (as opposed to a whole network), then
the Transmit and Queue terms are not relevant, and latency corresponds to the
propagation delay only.
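
That definition translates directly into code. Here is a minimal Python sketch (the example packet size, distance, and bandwidth are assumptions chosen for illustration):

    # Effective speed of light in common media, in metres per second.
    SPEED_OF_LIGHT = {"vacuum": 3.0e8, "copper": 2.3e8, "fiber": 2.0e8}

    def latency_s(distance_m, medium, packet_bits, bandwidth_bps, queue_s=0.0):
        """Total one-way latency = Propagation + Transmit + Queue, in seconds."""
        propagation = distance_m / SPEED_OF_LIGHT[medium]
        transmit = packet_bits / bandwidth_bps
        return propagation + transmit + queue_s

    # Example: a 1-KB packet over 4000 km of fiber at 10 Mbps, no queuing:
    # propagation = 20 ms, transmit ~ 0.82 ms, total ~ 20.8 ms.
    print(latency_s(4.0e6, "fiber", 8 * 1024, 10e6))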

Bandwidth and latency combine to define the performance characteristics of a
given link or channel. Their relative importance, however, depends on the
application. For some applications, latency dominates bandwidth. For example, a
client that sends a 1-byte message to a server and receives a 1-byte message in
return is latency bound. Assuming that no serious computation is involved in
preparing the response, the application will perform much differently on a
transcontinental channel with a 100-ms RTT than it will on an across-the-room
channel with a 1-ms RTT. Whether the channel is 1 Mbps or 100 Mbps is
relatively insignificant, however, since the former implies that the time to
transmit a byte (Transmit) is 8 μs and the latter implies Transmit = 0.08 μs.

In contrast, consider a digital library program that is being asked to fetch a 25-
megabyte (MB) image—the more bandwidth that is available, the faster it will be
able to return the image to the user. Here, the bandwidth of the channel
dominates performance. To see this, suppose that the channel has a bandwidth of
10 Mbps. It will take 20 seconds to transmit the image ((25 × 10⁶ bytes × 8
bits/byte) / (10 × 10⁶ bps) = 20 seconds), making it relatively unimportant if the image is on the
other side of a 1-ms channel or a 100-ms channel; the difference between a
20.001-second response time and a 20.1-second response time is negligible.
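
Both cases follow from the same simplified model, response time ≈ RTT + Size/Bandwidth (ignoring protocol overhead). A short sketch reproducing the numbers above:

    def response_time_s(rtt_s, size_bytes, bandwidth_bps):
        """Simplified response time: RTT plus the time to transmit the data."""
        return rtt_s + size_bytes * 8 / bandwidth_bps

    # Latency-bound: a 1-byte exchange is dominated by the RTT.
    print(response_time_s(0.100, 1, 1e6))      # ~0.100008 s
    print(response_time_s(0.100, 1, 100e6))    # ~0.10000008 s

    # Bandwidth-bound: the 25-MB image takes ~20 s regardless of RTT.
    print(response_time_s(0.001, 25e6, 10e6))  # 20.001 s
    print(response_time_s(0.100, 25e6, 10e6))  # 20.1 s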

Figure 17 gives you a sense of how latency or bandwidth can dominate
performance in different circumstances. The graph shows how long it takes to
move objects of various sizes (1 byte, 2 KB, 1 MB) across networks with RTTs
ranging from 1 to 100 ms and link speeds of either 1.5 or 10 Mbps. We use
logarithmic scales to show relative performance. For a 1-byte object (say, a
keystroke), latency remains almost exactly equal to the RTT, so that you cannot
distinguish between a 1.5-Mbps network and a 10-Mbps network. For a 2-KB
object (say, an email message), the link speed makes quite a difference on a 1-ms
RTT network but a negligible difference on a 100-ms RTT network. And for a 1-
MB object (say, a digital image), the RTT makes no difference—it is the link speed
that dominates performance across the full range of RTT.
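
The numbers behind a plot like Figure 17 can be regenerated with the same simplified model, total time = RTT + Size/Bandwidth; a sketch:

    # Sweep object sizes, link speeds, and RTTs as in the Figure 17 discussion.
    sizes = {"1 byte": 1, "2 KB": 2 * 1024, "1 MB": 2**20}
    for label, size_bytes in sizes.items():
        for mbps in (1.5, 10):
            for rtt_ms in (1, 100):
                total_s = rtt_ms / 1000 + size_bytes * 8 / (mbps * 1e6)
                print(f"{label:>6}  {mbps:>4} Mbps  RTT {rtt_ms:>3} ms -> {total_s:.4f} s")

The output shows the three regimes described above: for 1 byte the RTT alone determines the result, for 2 KB the link speed matters only at the 1-ms RTT, and for 1 MB the link speed dominates at every RTT.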

Note that throughout this document we use the terms latency and delay in a generic
way to denote how long it takes to perform a particular function, such as
delivering a message or moving an object. When we are referring to the specific
amount of time it takes a signal to propagate from one end of a link to another,
we use the term propagation delay. Also, we make it clear in the context of the
discussion whether we are referring to the one-way latency or the round-trip
time.

As an aside, computers are becoming so fast that when we connect them to
networks, it is sometimes useful to think, at least figuratively, in terms
of instructions per mile. Consider what happens when a computer that is able to
execute 100 billion instructions per second sends a message out on a channel
with a 100-ms RTT. (To make the math easier, assume that the message covers a
distance of 5000 miles.) If that computer sits idle the full 100 ms waiting for a
reply message, then it has forfeited the ability to execute 10 billion instructions,
or 2 million instructions per mile. It had better have been worth going over the
network to justify this waste.
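
The arithmetic in that aside, spelled out (the 5000-mile distance is the assumption stated above):

    instr_per_sec = 100e9   # 100 billion instructions per second
    rtt_s = 0.100           # 100-ms round-trip time
    distance_miles = 5000   # assumed distance covered by the message

    idle_instructions = instr_per_sec * rtt_s       # 10 billion instructions forfeited
    per_mile = idle_instructions / distance_miles   # 2 million instructions per mile
    print(idle_instructions, per_mile)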

Why is network performance important?

A network that has been set up correctly helps improve business
performance through technology, reduces overall costs, and frees your
internal IT resources to focus on business growth initiatives rather than
on building and managing network infrastructure.
What can affect network performance?

The performance of a network can be affected by various factors:

 The number of devices on the network.
 The bandwidth of the transmission medium.
 The type of network traffic.
 Network latency.
 The number of transmission errors.

How does bandwidth affect network performance?

The higher the bandwidth, the faster data can be transferred,
which also means the faster your website can load. If your website is not
supported by sufficient bandwidth, it will take more time for the site to
load completely, which hurts its performance.
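
As a rough illustration (the page size here is a made-up example, and real page loads also pay RTTs for DNS, TCP setup, and multiple requests), transfer time scales inversely with bandwidth:

    def load_time_s(page_bytes, bandwidth_bps):
        """Lower bound on transfer time, ignoring RTTs and protocol overhead."""
        return page_bytes * 8 / bandwidth_bps

    print(load_time_s(2e6, 1e6))    # a 2-MB page at 1 Mbps    -> 16.0 s
    print(load_time_s(2e6, 100e6))  # the same page at 100 Mbps -> 0.16 s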
