Congestion Control

Congestion Control is a crucial concept in computer networks. It refers to the methods used to prevent network overload and ensure smooth data flow. When too much data is sent through the network at once, it can cause delays and data loss. Congestion control techniques help manage the traffic, so all users can enjoy a stable and efficient network connection. These techniques are essential for maintaining the performance and reliability of modern networks.

What is Congestion?
Congestion in a computer network happens when there is too much data being
sent at the same time, causing the network to slow down. Just like traffic
congestion on a busy road, network congestion leads to delays and sometimes
data loss. When the network can’t handle all the incoming data, it gets
“clogged,” making it difficult for information to travel smoothly from one place
to another.

Benefits of Congestion Control in Computer Networks


• Improved Network Stability: Congestion control helps keep the
network stable by preventing it from getting overloaded. It manages
the flow of data so the network doesn’t crash or fail due to too much
traffic.
• Reduced Latency and Packet Loss: Without congestion control, data
transmission can slow down, causing delays and data loss.
Congestion control helps manage traffic better, reducing these delays
and ensuring fewer data packets are lost, making data transfer faster
and the network more responsive.
• Enhanced Throughput: By avoiding congestion, the network can use
its resources more effectively. This means more data can be sent in a
shorter time, which is important for handling large amounts of data
and supporting high-speed applications.
• Fairness in Resource Allocation: Congestion control ensures that
network resources are shared fairly among users. No single user or
application can take up all the bandwidth, allowing everyone to have a
fair share.
• Better User Experience: When data flows smoothly and quickly, users
have a better experience. Websites, online services, and applications
work more reliably and without annoying delays.
• Mitigation of Network Congestion Collapse: Without congestion
control, a sudden spike in data traffic can overwhelm the network,
causing severe congestion and making it almost unusable. Congestion
control helps prevent this by managing traffic efficiently and avoiding
such critical breakdowns.
Congestion control refers to the techniques used to prevent or relieve congestion. These techniques can be broadly classified into two categories:

Open Loop Congestion Control


Open loop congestion control policies are applied to prevent congestion before it happens. Here, congestion control is handled either by the source or by the destination.

Policies adopted by open loop congestion control:


1. Retransmission Policy: This policy governs how lost or corrupted packets are retransmitted. If the sender believes that a packet it sent has been lost or corrupted, it must retransmit that packet. Retransmissions, however, add traffic of their own, so retransmission timers must be designed to avoid worsening congestion while still keeping transmission efficient (a sketch of such an adaptive timer follows this list).
2. Window Policy: The type of window used at the sender's side can also affect congestion. With a Go-Back-N window, several packets are resent even though some of them may already have been received successfully, and this duplication can make congestion worse. A Selective Repeat window is therefore preferable, since it retransmits only the specific packets that were actually lost.
3. Discarding Policy: With a good discarding policy, routers can relieve congestion by discarding corrupted or less sensitive packets while still preserving the quality of the message. For example, during audio file transmission, routers can discard less sensitive packets to prevent congestion while maintaining the quality of the audio.
4. Acknowledgment Policy: Since acknowledgments are themselves part of the load on the network, the acknowledgment policy imposed by the receiver can also affect congestion. Several approaches reduce this load: the receiver can acknowledge N packets at a time rather than acknowledging each packet individually, or it can send an acknowledgment only when it has data of its own to send or when a timer expires.
5. Admission Policy: An admission policy provides a mechanism to prevent congestion before a new flow is accepted. Switches along a flow's path should first check the resource requirements of the flow before forwarding it further. If congestion already exists in the network, or is likely to occur, the router should refuse to establish the virtual-circuit connection so that the congestion does not get worse.

All the above policies are adopted to prevent congestion before it happens in
the network.
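As a concrete illustration of the retransmission policy (item 1 above), the sketch below shows one way an adaptive retransmission timer could be maintained, loosely following the smoothed round-trip-time estimator standardized for TCP in RFC 6298. The RetransmissionTimer class, its constants, and its bounds are illustrative assumptions rather than part of any protocol described here; the point is simply that the timeout stretches when the network slows down, so retransmissions do not add traffic to an already congested path.

```python
class RetransmissionTimer:
    """Adaptive retransmission timeout (RTO), in the spirit of RFC 6298 (sketch)."""

    def __init__(self, initial_rto=1.0, alpha=0.125, beta=0.25):
        self.srtt = None          # smoothed round-trip time estimate
        self.rttvar = None        # round-trip time variation estimate
        self.rto = initial_rto    # current retransmission timeout, in seconds
        self.alpha = alpha        # gain for the SRTT update
        self.beta = beta          # gain for the RTTVAR update

    def on_rtt_sample(self, rtt):
        """Update the timeout from a newly measured round-trip time."""
        if self.srtt is None:
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            # RTTVAR is updated with the old SRTT, then SRTT is updated.
            self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(self.srtt - rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        self.rto = max(1.0, self.srtt + 4 * self.rttvar)

    def on_timeout(self):
        """Exponential backoff: after a presumed loss, wait longer before retrying."""
        self.rto = min(self.rto * 2, 60.0)
```

A sender would call on_rtt_sample() whenever an acknowledgment arrives and on_timeout() before each retransmission, so that repeated losses make it progressively more patient instead of flooding the network.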

Closed Loop Congestion Control


Closed loop congestion control techniques are used to treat or alleviate
congestion after it happens. Several techniques are used by different protocols;
some of them are:
1. Backpressure: Backpressure is a technique in which a congested node stops receiving packets from its upstream node. This may cause the upstream node or nodes to become congested in turn and to refuse data from the nodes above them. Backpressure is therefore a node-to-node congestion control technique that propagates in the direction opposite to the flow of data. It can be applied only to virtual circuits, where each node knows which upstream node sends traffic to it.
For example, if the third node on a path becomes congested and stops receiving packets, the second node may become congested because its outgoing data flow slows down. Similarly, the first node may then become congested and inform the source to slow down.
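A minimal sketch of this hop-by-hop behaviour is given below, assuming a toy Node class with a bounded packet queue; the class, its names, and the capacity value are hypothetical and exist only to illustrate the idea. A node that cannot buffer any more packets simply refuses them, and that refusal is the backpressure signal felt by the upstream node.

```python
from collections import deque

class Node:
    """Toy store-and-forward node on a virtual circuit (illustrative sketch)."""

    def __init__(self, name, capacity=8):
        self.name = name
        self.capacity = capacity      # how many packets this node can buffer
        self.queue = deque()

    def congested(self):
        return len(self.queue) >= self.capacity

    def receive(self, packet):
        """Accept a packet from the upstream node, unless we are congested."""
        if self.congested():
            # Refusing the packet is the backpressure signal: the upstream
            # node must keep holding it and may become congested itself.
            return False
        self.queue.append(packet)
        return True

    def forward_one(self, downstream):
        """Try to pass the oldest queued packet to the next hop."""
        if self.queue and downstream.receive(self.queue[0]):
            self.queue.popleft()
        # If the downstream node refused, the packet stays queued here, the
        # queue grows, and the pressure propagates back toward the source.
```

Chaining such nodes as source, first node, second node, third node reproduces the scenario above: once the third node stops accepting packets, the second node's queue fills, then the first node's, until the source itself has to slow down.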

2. Choke Packet Technique: The choke packet technique is applicable to both virtual-circuit networks and datagram subnets. A choke packet is a packet sent by a node directly to the source to inform it of congestion. Each router monitors its resources and the utilization of each of its output lines. Whenever the utilization exceeds a threshold value set by the administrator, the router sends a choke packet to the source as feedback, asking it to reduce its traffic. The intermediate nodes through which the packets have traveled are not warned about the congestion.
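The sketch below illustrates the monitoring step of this technique. The OutputLine record, the threshold value, and the send_choke_packet callback are all assumptions made for the example rather than parts of a real router's interface.

```python
from dataclasses import dataclass, field

UTILIZATION_THRESHOLD = 0.8   # threshold set by the administrator (assumed value)

@dataclass
class OutputLine:
    capacity_bps: float                                   # line capacity, bits per second
    bits_sent_last_interval: float = 0.0                  # traffic observed in the last interval
    recent_sources: list = field(default_factory=list)    # sources of recently forwarded packets

    @property
    def utilization(self) -> float:
        return self.bits_sent_last_interval / self.capacity_bps

def check_line_and_choke(line: OutputLine, send_choke_packet) -> None:
    """If an output line is running above the threshold, send a choke packet
    directly back to each source that recently used it, asking it to slow down.
    The intermediate routers on the path receive no warning."""
    if line.utilization > UTILIZATION_THRESHOLD:
        for source in set(line.recent_sources):
            send_choke_packet(source)
```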

3. Implicit Signaling: In implicit signaling, there is no communication between the congested node (or nodes) and the source. The source guesses that there is congestion somewhere in the network. For example, when a sender transmits several packets and receives no acknowledgment for a while, one reasonable assumption is that the network is congested.
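A rough sketch of how a source might act on that guess follows. The ImplicitCongestionSender class, its rates, and its timeout are assumptions chosen for illustration: the sender halves its rate when acknowledgments stop arriving and probes gently for more bandwidth while they keep coming.

```python
import time

class ImplicitCongestionSender:
    """Sender that infers congestion from the absence of acknowledgments (sketch)."""

    def __init__(self, rate_pps=100, ack_timeout=2.0):
        self.rate_pps = rate_pps                 # current sending rate, packets per second
        self.ack_timeout = ack_timeout           # silence longer than this implies congestion
        self.last_ack_time = time.monotonic()

    def on_ack(self):
        """An acknowledgment arrived: the path looks healthy, so probe for more."""
        self.last_ack_time = time.monotonic()
        self.rate_pps += 1

    def check_for_congestion(self):
        """Called periodically; prolonged silence is treated as implicit congestion."""
        if time.monotonic() - self.last_ack_time > self.ack_timeout:
            self.rate_pps = max(1, self.rate_pps // 2)   # back off sharply
            self.last_ack_time = time.monotonic()        # avoid halving again immediately
```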

4. Explicit Signaling: In explicit signaling, a node that experiences congestion explicitly sends a signal to the source or the destination to inform it of the congestion. The difference from the choke packet technique is that the signal is included in the packets that carry data, rather than being sent as a separate packet. Explicit signaling can occur in either the forward or the backward direction.
• Forward Signaling: In forward signaling, a signal is sent in the same direction as the congested data flow, so the destination is warned about the congestion. The receiver in this case adopts policies to prevent further congestion.
• Backward Signaling: In backward signaling, a signal is sent in the direction opposite to the congested data flow, so the source is warned about the congestion and knows it needs to slow down.
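The sketch below shows one way such in-band signals could be carried, assuming a hypothetical DataPacket format with one congestion bit per direction, loosely analogous to the FECN and BECN bits of Frame Relay. A congested node simply sets the appropriate bit in packets it is already forwarding instead of creating a separate control packet.

```python
from dataclasses import dataclass

@dataclass
class DataPacket:
    payload: bytes
    forward_congestion_bit: bool = False    # read by the destination (forward signaling)
    backward_congestion_bit: bool = False   # read by the source (backward signaling)

def mark_if_congested(packet: DataPacket, node_is_congested: bool,
                      toward_destination: bool) -> DataPacket:
    """A congested node flags a data packet it is already forwarding,
    rather than generating a separate packet as the choke packet technique does."""
    if node_is_congested:
        if toward_destination:
            packet.forward_congestion_bit = True    # warn the destination
        else:
            packet.backward_congestion_bit = True   # tell the source to slow down
    return packet
```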
