Congestion Control: Reading: Sections 6.1-6.4
Goals of Today’s Lecture
• Principles of congestion control
– Learning that congestion is occurring
– Adapting to alleviate the congestion
• Resource allocation
– How nodes meet competing demands for resources
– E.g., link bandwidth and buffer space
– When to say no, and to whom
• Congestion control
– How nodes prevent or respond to overload conditions
– E.g., persuade hosts to stop sending, or slow down
– Typically has notions of fairness (i.e., sharing the pain)
Flow Control vs. Congestion Control
• Flow control
– Keeping one fast sender from overwhelming a slow receiver
• Congestion control
– Keeping a set of senders from overloading the network
• Connectionless flows
– No notions of connections inside the network
– … and no advance reservation of network resources
– Still, you can view related packets as a group (“flow”)
– … e.g., the packets in the same TCP transfer
• Best-effort service
– No guarantees for packet delivery or delay
– No preferential treatment for certain packets
Congestion is Unavoidable
• Two packets arrive at the same time
– The node can only transmit one
– … and either buffer or drop the other
Congestion Collapse
• Definition: Increase in network load results
in a decrease of useful work done
• Many possible causes
– Spurious retransmissions of packets still in flight
   Classical congestion collapse
   Solution: better timers and TCP congestion control
– Undelivered packets
   Packets consume resources and are dropped elsewhere in the network
   Solution: congestion control for ALL traffic
What Do We Want, Really?
• High throughput
– Throughput: measured performance of a system
– E.g., number of bits/second of data that get through
• Low delay
– Delay: time required to deliver a packet or message
– E.g., number of msec to deliver a packet
Load, Delay, and Power
• Typical behavior of queuing systems with random arrivals:
  [Plot: average packet delay as a function of load]
• A simple metric of how well the network is performing:
  Power = Load / Delay
  [Plot: average power as a function of load]
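To make the power metric concrete, here is a small Python sketch (not from the slides) that computes Power = Load / Delay under an illustrative M/M/1-style delay model; the service rate and the load values are assumptions chosen only for illustration.

```python
# Illustrative sketch: how "power = load / delay" behaves if delay follows a
# simple M/M/1-style queueing model, delay = 1 / (mu - load).
# The service rate mu = 1.0 is an arbitrary assumption.

mu = 1.0  # link service rate (packets per unit time), assumed

def delay(load):
    """Average delay grows without bound as load approaches capacity."""
    return 1.0 / (mu - load)

def power(load):
    """Power = load / delay: rewards throughput, penalizes delay."""
    return load / delay(load)

if __name__ == "__main__":
    for load in [0.1, 0.3, 0.5, 0.7, 0.9]:
        print(f"load={load:.1f}  delay={delay(load):5.2f}  power={power(load):.3f}")
    # Power peaks at load = mu / 2, well before the link is fully loaded.
```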
Simple Resource Allocation
• Simplest approach: FIFO queue and drop-tail
• Link bandwidth: first-in first-out queue
– Packets transmitted in the order they arrive
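As a rough illustration of this simplest approach, the sketch below (class and method names are illustrative, not from the slides) models a FIFO queue with tail drop: packets leave in arrival order, and arrivals are dropped once the buffer is full.

```python
# A minimal sketch of a FIFO, drop-tail queue.
from collections import deque

class DropTailQueue:
    def __init__(self, capacity):
        self.capacity = capacity   # buffer space, in packets
        self.buffer = deque()

    def enqueue(self, packet):
        """Accept the packet if there is room; otherwise drop it (tail drop)."""
        if len(self.buffer) < self.capacity:
            self.buffer.append(packet)
            return True
        return False  # dropped

    def dequeue(self):
        """Transmit the packet that arrived first (FIFO order)."""
        return self.buffer.popleft() if self.buffer else None
```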
Simple Congestion Detection
• Packet loss
– Packet gets dropped along the way
• Packet delay
– Packet experiences high delay
Idea of TCP Congestion Control
• Each source determines the available capacity
– … so it knows how many packets to have in transit
• Congestion window
– Maximum # of unacknowledged bytes to have in transit
– The congestion-control equivalent of receiver window
– MaxWindow = min{congestion window, receiver window}
– Send at the rate of the slowest component
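A minimal sketch of this bookkeeping, assuming byte-based variables (all names here are illustrative): the sender may have at most min(congestion window, receiver window) unacknowledged bytes in flight.

```python
# Sketch: how many more bytes the sender may transmit right now.
def usable_window(cwnd, rwnd, last_byte_sent, last_byte_acked):
    max_window = min(cwnd, rwnd)           # send at the rate of the slowest component
    in_flight = last_byte_sent - last_byte_acked
    return max(0, max_window - in_flight)  # bytes that may still be sent
```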
Additive Increase, Multiplicative Decrease
• Multiplicative decrease
– On loss of packet, divide congestion window in half
• Additive increase
– On success for last window of data, increase linearly
Leads to the TCP “Sawtooth”
[Plot: congestion window over time; the window grows until a loss, is halved, and grows again, producing the sawtooth]
Practical Details
• Congestion window
– Represented in bytes, not in packets (Why?)
– Packets have MSS (Maximum Segment Size) bytes
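The sketch below illustrates the AIMD rules from the previous slides in byte units, which is one reason the window is kept in bytes. The per-ACK increase of MSS*MSS/cwnd approximates "one MSS per round-trip time"; that detail follows standard TCP practice (RFC 5681) rather than anything stated on the slides, and the MSS value is an assumption.

```python
# Sketch of AIMD with a byte-based congestion window.
MSS = 1460  # Maximum Segment Size in bytes (typical value, assumed)

def on_ack(cwnd):
    """Additive increase: roughly one MSS per window of acknowledged data."""
    return cwnd + MSS * MSS // cwnd

def on_loss(cwnd):
    """Multiplicative decrease: halve the window, but keep at least one MSS."""
    return max(MSS, cwnd // 2)
```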
Getting Started
Need to start with a small CWND to avoid overloading the network.
[Diagram: the window doubles each round trip (1, 2, 4, 8 segments) as data packets (D) travel from Src to Dest and ACKs (A) return]
Slow Start and the TCP Sawtooth
[Plot: window vs. time, showing exponential “slow start” growth up to the first loss, followed by the sawtooth]
• Timeout
– Packet n is lost and detected via a timeout
– E.g., because all packets in flight were lost
– After the timeout, blasting away for the entire CWND
– … would trigger a very large burst in traffic
– So, better to start over with a low CWND
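A sketch of slow start together with the timeout behavior described above, assuming the standard-TCP notion of a slow-start threshold (ssthresh), which these slides do not introduce; units are bytes and all names and values are illustrative.

```python
# Sketch: restart with a low CWND after a timeout, grow exponentially at first,
# then fall back to linear (additive) increase.
MSS = 1460  # assumed segment size in bytes

def on_timeout(cwnd):
    """Start over with a small window; remember half the old window as the
    point where exponential growth should stop."""
    ssthresh = max(2 * MSS, cwnd // 2)
    return MSS, ssthresh                      # new cwnd, new ssthresh

def on_ack(cwnd, ssthresh):
    """Exponential "slow start" below ssthresh, linear increase above it."""
    if cwnd < ssthresh:
        return cwnd + MSS                     # window doubles roughly once per RTT
    return cwnd + MSS * MSS // cwnd           # about one MSS per RTT
```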
Repeating Slow Start After Timeout
[Plot: window vs. time; after a timeout the window drops back to a small value and slow start repeats]
Motivation for Nagle’s Algorithm
• Interactive applications
– Telnet and rlogin
– Generate many small packets (e.g., keystrokes)
• Influence on performance
– Interactive applications: enables batching of bytes
– Bulk transfer: transmits in MSS-sized packets anyway
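The slide motivates Nagle's algorithm without stating it; the usual rule is to send a small (sub-MSS) segment only when no previously sent data is still unacknowledged, so keystrokes get batched. A minimal sketch of that rule, with illustrative names:

```python
# Sketch of Nagle's send decision.
MSS = 1460  # assumed segment size in bytes

def nagle_can_send(queued_bytes, unacked_bytes):
    """Send a full segment immediately; send a small segment only when
    nothing is outstanding, so small writes get batched into one packet."""
    if queued_bytes >= MSS:
        return True               # full-sized segment: send right away
    return unacked_bytes == 0     # small segment: only if all sent data is ACKed
```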
Motivation for Delayed ACK
• TCP traffic is often bidirectional
– Data traveling in both directions
– ACKs traveling in both directions
• Piggybacking is appealing
– Host B can send an ACK to host A
– … as part of a data packet from B to A
TCP Header Allows Piggybacking
[Diagram: TCP header fields, including Sequence number, Acknowledgment, HdrLen, Flags (SYN, FIN, RST, PSH, URG, ACK), Advertised window, Checksum, Urgent pointer, Options (variable), and Data]
Example of Piggybacking
[Timeline diagram: data and ACKs exchanged between hosts A and B, with ACKs piggybacked on data flowing in the reverse direction]
Increasing Likelihood of Piggybacking
• Increase piggybacking
– TCP allows the receiver to wait to send the ACK
– … in the hope that the host will have data to send
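A sketch of the delayed-ACK decision, assuming the common practice of ACKing at least every second segment and within roughly 200 ms; those specifics are standard TCP behavior, not something these slides state, and all names are illustrative.

```python
# Sketch: hold the ACK briefly in the hope that outgoing data will carry it.
ACK_DELAY_MS = 200  # maximum time to hold an ACK (assumed)

def should_ack_now(segments_unacked, ms_since_oldest_unacked, have_data_to_send):
    if have_data_to_send:
        return True                      # piggyback the ACK on outgoing data
    if segments_unacked >= 2:
        return True                      # don't let too many segments go unACKed
    return ms_since_oldest_unacked >= ACK_DELAY_MS   # don't delay indefinitely
```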
Queuing Mechanisms
Bursty Loss From Drop-Tail Queuing
• TCP depends on packet loss
– Packet loss is the indication of congestion
– In fact, TCP drives the network into packet loss
– … by continuing to increase the sending rate
Slow Feedback from Drop Tail
• Feedback comes when buffer is completely full
– … even though the buffer has been filling for a while
Random Early Detection (RED)
• Basic idea of RED
– Router notices that the queue is getting backlogged
– … and randomly drops packets to signal congestion
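A sketch of the RED dropping decision, using the standard linear ramp in drop probability between a minimum and a maximum threshold on the average queue length; all parameter values here are assumptions, and choosing them well is exactly the difficulty noted on the next slide.

```python
# Sketch of RED: drop probability rises with the *average* queue length.
import random

MIN_TH, MAX_TH = 5, 15   # thresholds on average queue length (packets), assumed
MAX_P = 0.1              # drop probability at MAX_TH, assumed
WEIGHT = 0.002           # EWMA weight for averaging the queue length, assumed

def update_average(avg_qlen, qlen):
    """Exponentially weighted moving average of the instantaneous queue length."""
    return (1 - WEIGHT) * avg_qlen + WEIGHT * qlen

def should_drop(avg_qlen):
    if avg_qlen < MIN_TH:
        return False      # queue is short: never drop
    if avg_qlen >= MAX_TH:
        return True       # queue is long: always drop
    # In between: drop probability rises linearly from 0 to MAX_P.
    p = MAX_P * (avg_qlen - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```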
Problems With RED
• Hard to get the tunable parameters just right
– How early to start dropping packets?
– What slope for the increase in drop probability?
– What time scale for averaging the queue length?
• Many variations
– With cute names like “Blue” and “FRED”…
Explicit Congestion Notification
• Early dropping of packets
– Good: gives early feedback
– Bad: has to drop the packet to give the feedback