
SaTPEP: a TCP Performance Enhancing Proxy for Satellite Links

Dimitris Velenis, Dimitris Kalogeras, and Basil Maglaris

Department of Electrical and Computer Engineering, Network Management and Optimal Design (NETMODE) Laboratory, National Technical University of Athens, Heroon Politechniou 9, Zographou, 157 80, Athens, Greece
{dbelen,dkalo,maglaris}@netmode.ece.ntua.gr

Abstract. Satellite link characteristics cause reduced performance in TCP data transfers. In this paper we present SaTPEP, a TCP Performance Enhancing Proxy which attempts to improve TCP performance by performing connection splitting. SaTPEP monitors the satellite link utilization, and assigns to connections window values that reflect the available bandwidth. Loss recovery is based on Negative Acknowledgements. The performance of SaTPEP is investigated in terms of goodput and fairness, through a series of simulation experiments. Results obtained in these experiments show significant performance improvement in the presence of available bandwidth and at high error rates.¹

¹ This work is partially supported by OTE S.A. R&D laboratories.

1 Introduction
Satellite link characteristics, namely long propagation delays, large bandwidth ·
delay products, and high bit error rates, affect the performance of TCP, the
dominant transport layer protocol in the Internet. In network paths with large
bandwidth · delay products, TCP needs a considerable amount of time to set its
congestion window, cwnd, to the appropriate value [1]. Furthermore, TCP reacts
to segment drops by lowering cwnd [2]. When drops are caused by transmission
errors, TCP unnecessarily reduces its transmission rate.
Several methods to overcome those problems are listed in [3] and [4]. Many
of them employ end-to-end mechanisms. Others try to increase performance
by mechanisms implemented at certain points in the path between the TCP
endpoints [5], [6]. The Satellite Transport Protocol, STP [7], may be used either
in a split TCP connection over the satellite part of a network, or as a transport
layer protocol within a satellite network. TCP-Peach [8] attempts to improve
end-to-end TCP performance in a priority aware environment.
In this paper we introduce SaTPEP, a TCP Performance Enhancing Proxy
that aims at increasing the performance of TCP over single-hop satellite links.
SaTPEP’s flow control is based on link utilization measurements, and segment
loss is handled with Negative Acknowledgements (NACKs). The remainder of
the paper is organized as follows: In Section 2 we describe the design of SaTPEP.
In Section 3 we present simulation results obtained by a SaTPEP model in the
ns [9] simulator. Section 4 concludes the paper.

2 Satellite TCP Performance Enhancing Proxy - SaTPEP


SaTPEP consists of two gateways, one at each end of a bidirectional satellite link (or of a unidirectional satellite forward link with a terrestrial return link). Every TCP connection traversing the link is split as follows: one connection is established between the TCP sender and the Uplink Gateway (UG), another one between the two gateways, called the SaTPEP connection, and a third one between the Downlink Gateway (DG) and the TCP receiver.
In order to improve performance over the satellite hop, the SaTPEP con-
nection performs flow control based on link utilization measurements, and error
recovery with Negative Acknowledgements (NACKs).
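The following sketch illustrates the connection-splitting idea at the Uplink Gateway, under simplifying assumptions: the listening port, the DG address, and the plain byte-for-byte relay are hypothetical, and the relay shown uses ordinary TCP sockets, whereas the actual SaTPEP connection between UG and DG applies the flow control and NACK-based recovery described below.

```python
import socket
import threading

# Minimal, illustrative sketch of connection splitting at the Uplink Gateway (UG).
# The address, port, and plain relay are assumptions for illustration; the real
# SaTPEP connection between UG and DG uses its own flow control and NACKs.
DG_ADDR = ("downlink-gateway.example", 9000)   # hypothetical DG endpoint

def relay(src, dst):
    """Copy bytes from one connection to the other until EOF."""
    while True:
        data = src.recv(65536)
        if not data:
            break
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)

def handle_sender(sender_conn):
    # One connection terminates at the UG; a second one (the "SaTPEP connection")
    # is opened towards the DG, which in turn opens the third connection
    # to the TCP receiver.
    dg_conn = socket.create_connection(DG_ADDR)
    threading.Thread(target=relay, args=(sender_conn, dg_conn)).start()
    threading.Thread(target=relay, args=(dg_conn, sender_conn)).start()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("", 8000))                      # hypothetical UG listening port
listener.listen()
while True:
    conn, _ = listener.accept()
    threading.Thread(target=handle_sender, args=(conn,)).start()
```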

2.1 Flow Control


A SaTPEP connection begins with the standard TCP three-way handshake. The
SaTPEP sender (UG) does not perform any cwnd calculations. It just sets cwnd
to rwnd, the window value advertised by the SaTPEP receiver (DG). SaTPEP
measures window values in MSS-sized segments, rather than in bytes. At the
beginning of a SaTPEP connection, the sender sets rwnd to 1. On receipt of
the SYN-ACK segment, rwnd is set to the value in the window field of the
TCP header. This value is calculated by DG as the minimum of the available
buffer space for incoming data, and a window value calculated using the link
utilization measurement. Given the link capacity, DG can measure its utilization
by measuring the incoming data throughput. This throughput measurement is
based on the Packet Pair algorithm [10], and it is performed over all received IP
traffic. A timer, the idle timer, is used to handle periods of link inactivity. When it expires, the throughput measurement is set to zero.
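As a rough illustration of the measurement side, the sketch below estimates incoming throughput and zeroes it when the idle timer expires. It replaces the Packet Pair estimator of [10] with a simple byte count over a fixed interval; the interval and idle-timer values are assumptions, not values taken from the paper.

```python
import time

# Simplified DG-side throughput meter: bytes are averaged over a fixed
# measurement interval instead of using the Packet Pair algorithm of [10],
# and an idle timer zeroes the estimate when no traffic arrives.
MEASUREMENT_INTERVAL = 0.55   # seconds, e.g. one link RTT (assumption)
IDLE_TIMEOUT = 2.0            # hypothetical idle-timer value

class ThroughputMeter:
    def __init__(self):
        self.bytes_seen = 0
        self.window_start = time.monotonic()
        self.last_arrival = self.window_start
        self.throughput_bps = 0.0

    def on_packet(self, size_bytes):
        now = time.monotonic()
        if now - self.last_arrival > IDLE_TIMEOUT:
            # Idle timer expired: the previous measurement is stale.
            self.throughput_bps = 0.0
            self.window_start = now
            self.bytes_seen = 0
        self.last_arrival = now
        self.bytes_seen += size_bytes
        if now - self.window_start >= MEASUREMENT_INTERVAL:
            self.throughput_bps = 8 * self.bytes_seen / (now - self.window_start)
            self.window_start = now
            self.bytes_seen = 0
```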
Whenever a measurement is completed, DG calculates the available band-
width by subtracting the throughput measurement from the total bandwidth
of the link. The available bandwidth multiplied by the link RTT indicates how much additional data the link can carry. We call this value the Available Window, AW.
AW is distributed to the connections as an increment to their rwnd values. DG
may use a wide variety of criteria to distribute AW to its connections. It might
implement policies that favor certain types of traffic, or certain hosts over others.
In the present paper we propose an algorithm for distributing AW in a fair manner to all active connections. A connection is characterized as active if it has transmitted at least one data segment during the last throughput measurement. Non-active connections do not get a share of AW. Assuming n active connections, the rwnd value of the k-th connection is incremented by a value drwnd_k, defined in equation (1). Note that the drwnd_k · MSS_k products of all active connections sum up to AW.
\[ drwnd_k = \frac{AW}{MSS_k} \cdot \frac{2 \sum_{i=1}^{n} rwnd_i \; - \; n \cdot rwnd_k}{n \cdot \sum_{i=1}^{n} rwnd_i} \qquad (1) \]
When AW is distributed to the active connections, connections with smaller rwnd values receive a larger drwnd, leading to a steady state of fair bandwidth distribution among all active connections. The rwnd_k value is limited by a maximum value, max_rwnd_k, defined in equation (2), where c ≤ 1 and del is the link round-trip propagation delay. The constant c accounts for data that has been released from the SaTPEP socket buffer to the IP layer but has not yet been transmitted.
\[ max\_rwnd_k = \frac{1 + c}{n} \cdot \frac{bw \cdot del}{MSS_k} \qquad (2) \]
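The window assignment of equations (1) and (2) can be written directly as code. The sketch below is a minimal illustration; the data structures, the unit conventions (aw and bw in bytes and bytes/sec), and the choice c = 1 are assumptions made for readability.

```python
# Sketch of the window assignment at the DG, following equations (1) and (2).
# rwnd values are in MSS-sized segments, as in the paper; aw is the Available
# Window in bytes and bw the link capacity in bytes/sec (assumed conventions).

def distribute_available_window(aw, connections, bw, rtt, c=1.0):
    """connections: list of dicts with 'rwnd' (segments) and 'mss' (bytes)."""
    n = len(connections)
    if n == 0:
        return
    total_rwnd = sum(conn["rwnd"] for conn in connections)
    for conn in connections:
        # Equation (1): the increment is larger for connections with smaller
        # rwnd, and the increments, multiplied by MSS, sum up to AW.
        drwnd = (aw / conn["mss"]) * (2 * total_rwnd - n * conn["rwnd"]) / (n * total_rwnd)
        # Equation (2): cap rwnd at this connection's fair share of the
        # link's bandwidth-delay product.
        max_rwnd = (1 + c) / n * (bw * rtt) / conn["mss"]
        conn["rwnd"] = min(conn["rwnd"] + drwnd, max_rwnd)
```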
Whenever a connection becomes idle, its rwnd is reset to 1 segment. By setting its cwnd to rwnd, the SaTPEP sender transmits much larger data bursts than a TCP sender would. Since these bursts are transmitted over a one-hop path, a buffer size of at least the bandwidth · delay product of the satellite link is enough to ensure that the link will not experience congestion.

2.2 Loss Recovery


The SaTPEP flow control mechanism guarantees that the link will not experience congestion. Therefore, segment drops are only caused by errors, and SaTPEP does not reduce cwnd when loss is detected. The SaTPEP sender enters recovery mode when the first duplicate acknowledgement, dupACK, is received, since there can be no segment reordering on a single-hop connection. While in recovery mode, cwnd is inflated by dupwnd, the number of dupACKs received. Recovery
ends when recover, the highest sequence number transmitted when the first
dupACK arrived, is acknowledged.
The SaTPEP receiver notifies the sender of missing segments by means of Negative Acknowledgements, NACKs. NACKs are included in dupACKs as a TCP option, describing a contiguous missing part of the receiver data stream with sequence numbers lower than the maximum sequence number received. The receiver transmits NACKs of increasing sequence numbers in successive dupACKs, and repeats the same NACKs in a cyclic manner until the data they describe is received.
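The paper does not specify the wire format of the NACK option, only that it describes one contiguous missing block of the receiver data stream. The sketch below shows one hypothetical encoding; the option kind, length, and field layout are inventions for illustration only.

```python
import struct

# Hypothetical NACK encoding as a TCP option: kind, length, and the missing
# [first, last] sequence-number range. The kind value is made up (unassigned).
NACK_OPTION_KIND = 200

def encode_nack(first_missing_seq, last_missing_seq):
    """Pack a NACK option describing one contiguous missing block."""
    return struct.pack("!BBII", NACK_OPTION_KIND, 10,
                       first_missing_seq, last_missing_seq)

def decode_nack(option_bytes):
    kind, length, first, last = struct.unpack("!BBII", option_bytes)
    assert kind == NACK_OPTION_KIND and length == 10
    return first, last
```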
On receipt of a NACK, the SaTPEP sender retransmits the requested seg-
ment(s) along with as much new data as the inflated cwnd allows. The sender will
not respond to repeated NACKs until a counter, rtx count, reaches zero. rtx count is set to rwnd − 1 on receipt of the first dupACK and is decreased by 1 for every dupACK received. It is an estimate of the expected number of dupACKs that will arrive before the retransmission reaches the receiver. As long as rtx count > 0, the retransmitted segment cannot have reached the receiver, and there is no point in repeating the retransmission. When rtx count reaches zero, it is reset and the sender repeats a complete cycle of all retransmissions still requested by incoming NACKs.
SaTPEP also utilizes TCP's Retransmission Timer. Whenever the timer expires, the SaTPEP sender sets rwnd to 1, dupwnd to 0, and retransmits the segment requested by the last ACK received. Figure 1 describes more formally the loss recovery algorithm implemented at the SaTPEP sender.
Initially: dupwnd = 0, recover = null, hinack = null, hiack = null
rtx count = null, rtx allow = 0, rtx stop = null
On arrival of 1st dupACK:
– hiack ← dupACK’s ack no, recover ← highest transmitted seq no, dupwnd ← 1
– if rtx count is null or 0: rtx count ← rwnd − 1
– hinack ← NACK’s highest seq no
– retransmit segment(s) requested in NACK option
– reduce dupwnd by number of segments retransmitted
– transmit new data (as much as cwnd allows)
On arrival of any dupACK:
– dupwnd ← dupwnd + 1, rtx count ← rtx count − 1
– if NACK’s highest seq no > hinack and rtx allow=0:
• retransmit what NACK requests, hinack ← NACK’s highest seq no
– else if rtx allow=1 and NACK does not contain rtx stop:
• retransmit segment(s) requested in NACK option
• if hinack > NACK’s highest seq no: hinack ← NACK’s highest seq no
– else if rtx allow=1 and NACK contains rtx stop:
• rtx allow ← 0, rtx stop ← null, rtx count ← rwnd
– reduce dupwnd by amount of data retransmitted
– if rtx count = 0: rtx allow ← 1, rtx stop ← NACK’s highest seq no
– transmit new data (as much as cwnd allows)
On arrival of Partial ACK:
– reduce dupwnd by Partial ACK’s ack no - hiack
– hiack ← Partial ACK’s ack no, Perform actions for any dupACK
On arrival of New ACK:
– dupwnd ← 0, recover ← null, hiack ← New ACK's ack no,
rtx count ← null, rtx allow ← 0

Fig. 1. SaTPEP sender reaction to Duplicate Acknowledgements
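For readers who prefer code, the following is a condensed, non-authoritative Python rendering of the dupACK handling of Fig. 1 (Partial ACK, New ACK, and retransmission-timer handling are omitted). retransmit() and send_new_data() are hypothetical hooks, and a NACK is represented simply as the list of sequence numbers it reports missing.

```python
# Condensed sketch of the sender-side dupACK handling of Fig. 1.
class SatpepSender:
    def __init__(self, rwnd):
        self.rwnd = rwnd
        self.dupwnd = 0
        self.rtx_count = None
        self.rtx_allow = False
        self.rtx_stop = None
        self.hinack = None

    def on_first_dupack(self, nack):
        self.dupwnd = 1
        if not self.rtx_count:
            # Expected number of dupACKs before the retransmission reaches the DG.
            self.rtx_count = self.rwnd - 1
        self.hinack = max(nack)
        self.retransmit(nack)
        self.send_new_data()

    def on_dupack(self, nack):
        self.dupwnd += 1
        self.rtx_count -= 1
        if max(nack) > self.hinack and not self.rtx_allow:
            # NACK reports a newly discovered hole: retransmit immediately.
            self.retransmit(nack)
            self.hinack = max(nack)
        elif self.rtx_allow and self.rtx_stop not in nack:
            # A full retransmission cycle is in progress.
            self.retransmit(nack)
            self.hinack = min(self.hinack, max(nack))
        elif self.rtx_allow and self.rtx_stop in nack:
            # Cycle completed: stop repeating retransmissions.
            self.rtx_allow, self.rtx_stop, self.rtx_count = False, None, self.rwnd
        if self.rtx_count == 0:
            self.rtx_allow, self.rtx_stop = True, max(nack)
        self.send_new_data()

    def retransmit(self, nack):
        self.dupwnd -= len(nack)   # cwnd stays inflated by the remaining dupwnd

    def send_new_data(self):
        pass                       # send as much new data as cwnd allows
```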

3 Simulation Experiments

We evaluate the performance of SaTPEP in comparison to SACK-TCP, in a series of simulation experiments using the ns simulator [9]. A bi-directional GEO satellite link is used to establish communication between N data senders and N receivers. The data senders are connected to the Uplink Gateway, and the receivers to the Downlink Gateway. They perform bulk data transfers of various file sizes. The packet size is set to 1500 bytes. The propagation delay of the satellite link is set to 275 ms, resulting in an RTT of 550 ms between UG and DG. Link capacity values range from 2 to 10 Mbps. All other links have a propagation delay of 1 ms, and their capacity is set to 10 Mbps, or 100 Mbps in the case of a 10 Mbps satellite link. The packet loss probability, P_loss, of the satellite link ranges from 10⁻⁶ to 10⁻². All other links are error-free. Queue sizes are set to 600 packets for all links, so that end-to-end TCP transfers do not experience congestion loss. The focus of our comparison is on goodput (file size / connection duration), as perceived by the receiver hosts, and on fairness between multiple simultaneous connections.
In the first series of experiments N is set to 1. All other parameters cover the full ranges already mentioned. In order for TCP to be able to eventually fully utilize the satellite link, we have set the TCP rwnd to rather high values, from 100 to 500 segments. Figure 2 depicts the goodput achieved by SaTPEP and TCP for different P_loss values. The file size is 1 Mbyte and the link capacity 6 Mbps. SaTPEP performs significantly better than TCP because it is able to fully utilize the link after the first RTT of the connection. Frequent losses cause TCP's cwnd to remain low, while SaTPEP still raises cwnd high enough to fully utilize the link. Figure 2 also depicts the goodput ratio of SaTPEP to TCP, which rises significantly for P_loss = 10⁻². For a given P_loss value, SaTPEP's performance increases for larger file sizes and link capacities, as shown in Figure 3. Both graphs are obtained for P_loss = 10⁻³.

[Figure: two panels, Goodput (kbytes/sec) and Goodput Ratio (SaTPEP/TCP), both versus Packet Loss Probability]
Fig. 2. Goodput and Goodput Ratio for different P_loss values

[Figure: two panels, Goodput (kbytes/sec) versus File Size (Mbytes) and versus Link Capacity (Mbps), for SaTPEP and TCP]
Fig. 3. Goodput for different file size and link capacity values

In the second series of experiments, we set N to 21 and the link capacity to 6 Mbps. At time t1 = 1 sec, twenty senders begin transmission of a 2 Mbyte file each. At time t2 = 10 sec, the 21st sender begins transmission of a 500 kbyte file. The rwnd value for TCP connections is set to 25 segments, high enough to result in full utilization of the link without causing congestion during the initial Slow Start phase. Figure 4 depicts the goodput achieved by each of the initial twenty connections for P_loss = 10⁻³. It is clear that SaTPEP distributes the link capacity more fairly than TCP does. The average goodput achieved by the twenty initial connections, along with the goodput of the 21st connection, is shown in Figure 4 for different P_loss values.
[Figure: two panels, Goodput (kbytes/sec) per connection for SaTPEP and TCP, and average Goodput versus Packet Loss Probability]
Fig. 4. Goodput of 20 simultaneous connections. Average goodput of the 20 connections, and goodput of connection 21, for different P_loss values

4 Conclusion
In this paper we introduced SaTPEP, a TCP Performance Enhancing Proxy aiming at improving TCP performance over satellite links. SaTPEP's flow control is based on link utilization measurements. Loss recovery is based on Negative Acknowledgements. Simulation experiments show significant performance improvement over TCP in the presence of available link capacity and under high error rates. Under heavy traffic, SaTPEP exhibits remarkable fairness between simultaneous connections.

References
1. C. Partridge and T. Shepard, “TCP/IP Performance over Satellite Links,” IEEE
Network Mag., pp. 44–49, Sept. 1997.
2. V. Jacobson, “Congestion Avoidance and Control,” in Proc. ACM SIGCOMM,
Stanford, CA USA, Aug. 1988.
3. M. Allman, D. Glover, and L. Sanchez, “Enhancing TCP over Satellite Channels
using Standard Mechanisms,” RFC 2488, Jan. 1999.
4. M. Allman, S. Dawkins, D. Glover, J. Griner, D. Tran, T. Henderson, J. Heide-
mann, J. Touch, H. Kruse, S. Ostermann, K. Scott, and J. Semke, “Ongoing TCP
Research Related to Satellites,” RFC 2760, Aug. 2000.
5. J. Border, M. Kojo, J. Griner, G. Montenegro, and Z. Shelby, “Performance En-
hancing Proxies Intended to Mitigate Link-Related Degradations,” RFC 3135, June
2001.
6. I. Minei and R. Cohen, “High-Speed Internet Access Through Unidirectional Geo-
stationary Satellite Channels,” IEEE JSAC, vol. 17, no. 2, pp. 345–359, Feb. 1999.
7. T. Henderson and R. Katz, “Transport Protocols for Internet-Compatible Satellite
Networks,” IEEE JSAC, vol. 17, no. 2, pp. 326–344, Feb. 1999.
8. I. Akyildiz, G. Morabito, and S. Palazzo, “TCP-Peach: A New Congestion Control
Scheme for Satellite IP Networks,” IEEE/ACM Transactions on Networking, vol.
9, no. 3, pp. 307–321, June 2001.
9. “NS (Network Simulator),” http://www.isi.edu/nsnam/ns/.
10. S. Keshav, “A Control-Theoretic Approach to Flow Control,” in Proc. ACM
SIGCOMM, Zurich, Switzerland, Sept. 1991.
