Buffer sizes for large multiplexers:
TCP queueing theory and instability analysis
Gaurav Raina (Cambridge), Damon Wischik (UCL), Mark Handley (UCL)
Sizing router buffers (SIGCOMM 2004)
Abstract. All Internet routers contain buffers to hold packets during times of congestion. Today, the size
of the buffers is determined by the dynamics of TCP's congestion control algorithm. In particular, the goal
is to make sure that when a link is congested, it is busy 100% of the time, which is equivalent to making
sure its buffer never goes empty. A widely used rule-of-thumb states that each link needs a buffer of size B
= RTT*C, where RTT is the average round-trip time of a flow passing across the link, and C is the data
rate of the link. For example, a 10Gb/s router linecard needs approximately 250ms*10Gb/s = 2.5Gbits of
buffers; and the amount of buffering grows linearly with the line-rate. Such large buffers are challenging
for router manufacturers, who must use large, slow, off-chip DRAMs. And queueing delays can be long,
have high variance, and may destabilize the congestion control algorithms. In this paper we argue that the
rule-of-thumb (B = RTT*C) is now outdated and incorrect for backbone routers. This is because of the
large number of flows (TCP connections) multiplexed together on a single backbone link. Using theory,
simulation and experiments on a network of real routers, we show that a link with N flows requires no
more than B = (RTT*C)/√N, for long-lived or short-lived TCP flows. The consequences on router design
are enormous: A 2.5Gb/s link carrying 10,000 flows could reduce its buffers by 99% with negligible
difference in throughput; and a 10Gb/s link carrying 50,000 flows requires only 10Mbits of buffering,
which can easily be implemented using fast, on-chip SRAM.
http://tiny-tera.stanford.edu/~nickm/papers/index.html
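To make the abstract's arithmetic concrete, here is a minimal check of its figures (the 250 ms RTT and the √N rule are taken from the abstract itself):

```python
import math

def buffer_bits(rtt_s, rate_bps, n_flows=1):
    """Buffer size under the sqrt(N) rule: B = RTT*C / sqrt(N).
    With n_flows=1 this reduces to the classic rule-of-thumb B = RTT*C."""
    return rtt_s * rate_bps / math.sqrt(n_flows)

rtt = 0.250  # 250 ms average round-trip time, as in the abstract

# Rule-of-thumb: a 10 Gb/s linecard needs ~2.5 Gbit of buffering.
print(buffer_bits(rtt, 10e9) / 1e9, "Gbit")

# Revised rule: 10 Gb/s with 50,000 flows needs ~11 Mbit (the "10 Mbits" above).
print(buffer_bits(rtt, 10e9, 50_000) / 1e6, "Mbit")

# 2.5 Gb/s with 10,000 flows: sqrt(10,000) = 100, i.e. a 99% buffer reduction.
print(buffer_bits(rtt, 2.5e9, 10_000) / buffer_bits(rtt, 2.5e9))
```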
Queue simulations
Simulate a queue
• fed by N flows, each of rate x pkts/sec (x=0.95 then 1.05 pkts/sec)
• served at rate NC (C=1 pkt/sec)
[Figure: queue size and arrival rate x versus time, for N = 50, 100, 500, 1000]
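A minimal sketch of such a simulation, assuming Poisson arrivals per flow (the slide does not state the arrival process) and a deterministic service rate:

```python
import numpy as np

def simulate_queue(n_flows, x, c=1.0, horizon=80, seed=0):
    """Queue fed by n_flows flows of x pkts/sec each (Poisson, by assumption),
    served at a deterministic rate of n_flows * c pkts/sec."""
    rng = np.random.default_rng(seed)
    q, trace = 0.0, []
    for _ in range(horizon):
        arrivals = rng.poisson(n_flows * x)        # aggregate arrivals this second
        q = max(q + arrivals - n_flows * c, 0.0)   # serve; queue cannot go negative
        trace.append(q)
    return trace

for n in (50, 100, 500, 1000):
    # Underloaded (x = 0.95 < C), then overloaded (x = 1.05 > C), as on the slide.
    under = simulate_queue(n, 0.95)
    over = simulate_queue(n, 1.05)
    print(f"N={n:4d}: mean queue {np.mean(under):7.1f} at x=0.95, "
          f"final queue {over[-1]:7.1f} at x=1.05")
```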
Fixed-Point Models for the End-to-End Performance Analysis of IP Networks (ITC 2000)
Abstract. This paper presents a new approach to modeling end-to-end performance for IP
networks. Unlike earlier models, in which end stations generate traffic at a constant rate, the
work discussed here takes the adaptive behaviour of TCP/IP into account. The approach is
based on a fixed-point method which determines packet loss, link utilization and TCP
throughput across the network. Results are presented for an IP backbone network, which
highlight how this new model finds the natural operating point for TCP, which depends on
route lengths (via round-trip times and number of resources), end-to-end packet loss and the
number of user sessions.
http://www.statslab.cam.ac.uk/~frank/PAPERS/fpmee.html
Fixed-point analysis
[Figure: log10 of packet-loss probability, with curves for C*RTT = 4, 20, and 100 pkts]
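A minimal sketch of the fixed-point computation for a single bottleneck link. The square-root TCP throughput formula x = sqrt(2/p)/RTT and the M/M/1/B loss formula are standard modeling choices assumed here, not necessarily the exact ingredients of the paper:

```python
def tcp_rate(p, rtt):
    """Per-flow TCP throughput in pkts/sec: the square-root formula sqrt(2/p)/RTT."""
    return (2.0 / p) ** 0.5 / rtt

def mm1b_loss(rho, buf):
    """Loss probability of an M/M/1/B queue at offered load rho."""
    if abs(rho - 1.0) < 1e-9:
        return 1.0 / (buf + 1)
    return (1.0 - rho) * rho**buf / (1.0 - rho**(buf + 1))

def fixed_point(n_flows, capacity, rtt, buf):
    """Find p with loss(load(p)) = p by bisection. The mismatch
    loss(load(p)) - p is decreasing in p (more loss -> lower TCP rate ->
    lower load -> less loss), so bisection finds the unique crossing:
    the natural operating point."""
    lo, hi = 1e-9, 0.99
    for _ in range(200):
        p = (lo + hi) / 2.0
        rho = n_flows * tcp_rate(p, rtt) / capacity  # offered load given loss p
        if mm1b_loss(rho, buf) > p:
            lo = p   # actual loss exceeds the guess: raise the guess
        else:
            hi = p
    return p, rho

# Illustrative numbers: 100 flows sharing 6000 pkts/sec, i.e. 60 pkts/sec/flow.
p, rho = fixed_point(n_flows=100, capacity=6000.0, rtt=0.2, buf=20)
print(f"fixed point: loss p ~ {p:.3e}, utilization ~ {min(rho, 1.0):.3f}")
```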
Fluid-based Analysis of a Network of AQM Routers Supporting TCP Flows with an Application to RED (SIGCOMM 2000)
Abstract. In this paper we use jump process driven Stochastic Differential Equations to model the
interactions of a set of TCP flows and Active Queue Management routers in a network setting. We show
how the SDEs can be transformed into a set of Ordinary Differential Equations which can be easily solved
numerically. Our solution methodology scales well to a large number of flows. As an application, we
model and solve a system where RED is the AQM policy. Our results show excellent agreement with
those of similar networks simulated using the well known ns simulator. Our model enables us to get an
in-depth understanding of the RED algorithm. Using the tools developed in this paper, we present a
critical analysis of the RED algorithm. We explain the role played by the RED configuration parameters
on the behavior of the algorithm in a network. We point out a flaw in the RED averaging mechanism
which we believe is a cause of tuning problems for RED. We believe this modeling/solution methodology
has great potential in analyzing and understanding various network congestion control algorithms.
ftp://gaia.cs.umass.edu/pub/Misra00_AQM.ps.gz
Stability/instability of the fluid model
[Figure: fluid-model arrival rate versus time, oscillating between roughly 0.6 and 1.4]
[Figure: log10 of packet-loss probability for C*RTT = 4, 20, and 100 pkts, repeated from the fixed-point analysis slide]
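A minimal Euler-integration sketch of a fluid model of this kind. The system below is the standard single-link delay-differential model (window growth 1/RTT, halving on loss), with a smooth queue-based loss function as a stand-in for drop-tail; the parameters echo the 20-flow illustration on the next slide (60 pkts/sec/flow, RTT = 200 ms):

```python
from collections import deque

def simulate_fluid(n_flows=20, cap=1200.0, rtt=0.2, buf=60.0,
                   t_end=100.0, dt=0.001):
    """Euler integration of a single-link TCP fluid model:
        dW/dt = 1/RTT - W(t) * W(t-RTT) * p(t-RTT) / (2*RTT)
        dq/dt = N*W/RTT - C,   q clipped to [0, buf]
    with loss p(q) = (q/buf)**8, a smooth stand-in for drop-tail."""
    lag = int(round(rtt / dt))               # RTT expressed in Euler steps
    w, q = 1.0, 0.0
    hist_w = deque([w] * lag, maxlen=lag)    # window values one RTT ago
    hist_p = deque([0.0] * lag, maxlen=lag)  # loss values one RTT ago
    trace = []
    for k in range(int(t_end / dt)):
        w_old, p_old = hist_w[0], hist_p[0]
        dw = 1.0 / rtt - w * w_old * p_old / (2.0 * rtt)
        dq = n_flows * w / rtt - cap
        hist_w.append(w)
        hist_p.append(min((q / buf) ** 8, 1.0))
        w = max(w + dw * dt, 0.1)
        q = min(max(q + dq * dt, 0.0), buf)
        if k % int(1.0 / dt) == 0:           # sample once per simulated second
            trace.append((k * dt, n_flows * w / (rtt * cap)))
    return trace

# Normalized arrival rate over time; values swinging about 1 would indicate
# the synchronized oscillation the slide depicts.
for t, load in simulate_fluid()[::10]:
    print(f"t = {t:5.1f} s   arrival rate / capacity = {load:5.2f}")
```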
Illustration: 20 flows
Standard TCP, single bottleneck link, no AQM
service=60 pkts/sec/flow, RTT=200 ms, #flows=20
[Figure: simulation results for this scenario, on a log10 packet-loss scale]
Scalable TCP: improving performance in highspeed wide area networks (SIGCOMM CCR 2003)
Tom Kelly, CERN -- IT division
Abstract. TCP congestion control can perform badly in highspeed wide area networks because of its slow
response with large congestion windows. The challenge for any alternative protocol is to better utilize
networks with high bandwidth-delay products in a simple and robust manner without interacting badly
with existing traffic. Scalable TCP is a simple sender-side alteration to the TCP congestion window update
algorithm. It offers a robust mechanism to improve performance in highspeed wide area networks using
traditional TCP receivers. Scalable TCP is designed to be incrementally deployable and behaves
identically to traditional TCP stacks when small windows are sufficient. The performance of the scheme is
evaluated through experimental results gathered using a Scalable TCP implementation for the Linux
operating system and a gigabit transatlantic network. The preliminary results gathered suggest that the
deployment of Scalable TCP would have negligible impact on existing network traffic at the same time as
improving bulk transfer performance in highspeed wide area networks.
http://www-lce.eng.cam.ac.uk/~ctk21/scalable/
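The congestion-window update rules behind this are simple to state. A minimal sketch contrasting the two (the constants a = 0.01 and b = 0.125 are the ones proposed for Scalable TCP; the recovery-time experiment illustrates the "slow response with large congestion windows" the abstract refers to):

```python
def std_ack(cwnd):
    """Standard TCP congestion avoidance: roughly +1 packet per RTT."""
    return cwnd + 1.0 / cwnd

def std_loss(cwnd):
    """Standard TCP: halve the window on a loss event."""
    return cwnd / 2.0

def scalable_ack(cwnd, a=0.01):
    """Scalable TCP: fixed increment of a = 0.01 packets per ACK."""
    return cwnd + a

def scalable_loss(cwnd, b=0.125):
    """Scalable TCP: cut the window by the fixed fraction b = 0.125."""
    return cwnd * (1.0 - b)

def recovery_rtts(ack_rule, loss_rule, target, max_rtts=100_000):
    """RTTs needed to climb back to `target` after one loss at `target`."""
    cwnd = loss_rule(target)
    for rtt in range(1, max_rtts):
        for _ in range(int(cwnd)):    # one ACK per packet in flight this RTT
            cwnd = ack_rule(cwnd)
        if cwnd >= target:
            return rtt
    return max_rtts

for w in (100, 1000):
    print(f"cwnd = {w:5d}: standard TCP recovers in "
          f"{recovery_rtts(std_ack, std_loss, w):4d} RTTs, "
          f"Scalable TCP in {recovery_rtts(scalable_ack, scalable_loss, w):3d} RTTs")
```

Standard TCP's recovery time grows linearly with the window (and hence with the bandwidth-delay product), while Scalable TCP recovers in a constant number of RTTs (about 14) regardless of window size.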
Higher utilization
• A fixed buffer cannot give high utilization over all values of C*RTT.
• By choosing B = c1 + c2*log(C*RTT)/log(ρ) we get utilization ρ. [Towsley]
[Figure: log10 of packet-loss probability p for buffer sizes B = 10, 20, 50 and target utilizations ρ = 0.8, 0.9, 0.95, 0.99]
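One way to see where a log-shaped buffer rule of this form can come from (a sketch under assumptions: a geometric queue tail ρ^B and the square-root TCP loss requirement p ~ 2/W²; the resulting constants are illustrative, not the slide's c1 and c2):

```python
import math

def buffer_for_utilization(c_rtt, rho):
    """Buffer B such that a queue whose overflow probability decays like
    rho**B drops packets at the rate TCP demands, p ~ 2/(C*RTT)**2.
    Solving rho**B = p gives B = log(p)/log(rho): both logs are negative
    for rho < 1 and p < 1, so B > 0 and grows like log(C*RTT)/log(1/rho)."""
    p = 2.0 / c_rtt**2              # loss rate a TCP window of C*RTT settles at
    return math.log(p) / math.log(rho)

for rho in (0.8, 0.9, 0.95, 0.99):
    sizes = ", ".join(f"{buffer_for_utilization(c, rho):6.1f}"
                      for c in (4, 20, 100))
    print(f"rho = {rho:4}: B = {sizes} pkts for C*RTT = 4, 20, 100")
```

The required buffer grows only logarithmically in C*RTT for a fixed target utilization, but blows up as ρ approaches 1, matching the bullet above that no single fixed buffer serves all values of C*RTT.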