Let's Make TCP Faster - The Official Google Code Blog


http://googlecode.blogspot.com/2012/01/lets-make-tcp-faster.html


Monday, January 23, 2012


Let's make TCP faster
By Yuchung Cheng, Make The Web Faster Team

Transmission Control Protocol (TCP), the workhorse of the Internet, is designed to deliver all the Web's content and operate over a huge range of network types. To deliver content effectively, Web browsers typically open several dozen parallel TCP connections ahead of making actual requests. This strategy overcomes inherent TCP limitations but results in high latency in many situations and is not scalable.

Our research shows that the key to reducing latency is saving round trips. We're experimenting with several improvements to TCP. Here's a summary of some of our recommendations to make TCP faster:

1. Increase the TCP initial congestion window to 10 segments (IW10). The amount of data sent at the beginning of a TCP connection is currently 3 packets, implying 3 round trips (RTTs) to deliver even a tiny, 15 KB piece of content. Our experiments indicate that IW10 reduces the network latency of Web transfers by over 10% (a sketch of the round-trip arithmetic follows this list).
2. Reduce the initial timeout from 3 seconds to 1 second. An initial timeout of 3 seconds was appropriate a couple of decades ago, but today's Internet requires a much smaller timeout. Our rationale for this change is well documented here (a short sketch of the corresponding Linux constant follows this list).
3. Use TCP Fast Open (TFO). For 33% of all HTTP requests, the browser needs to first spend one RTT to establish a TCP connection with the remote peer. Most HTTP responses fit in the initial TCP congestion window of 10 packets, so this handshake round trip doubles the response time. TFO removes this overhead by including the HTTP request in the initial TCP SYN packet. We've demonstrated TFO reducing page load time by 10% on average, and by over 40% in many situations. Our research paper and Internet-Draft address concerns such as dropped packets and DoS attacks when using TFO (a minimal client-side sketch follows this list).

4. Use Proportional Rate Reduction for TCP (PRR). Packet losses indicate that the network is in disorder or congested. PRR, a new loss-recovery algorithm, retransmits smoothly to recover losses during network congestion. It recovers faster than the current mechanism by adjusting the transmission rate according to the degree of losses. PRR is now part of the Linux kernel and is in the process of becoming part of the TCP standard (a sketch of the per-ACK computation follows this list).
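
To make the round-trip arithmetic in item 1 concrete, here is a small illustrative C program (not from the original post) that counts how many round trips idealized slow start needs to deliver a response, assuming a 1,460-byte MSS, no losses, and one full congestion window sent per RTT:

    /* Illustrative only (not from the original post): count the round trips
     * idealized slow start needs to deliver `bytes` of content, assuming a
     * 1460-byte MSS, no losses, and a window that doubles every RTT. */
    #include <stdio.h>

    static int rtts_needed(long bytes, long initial_window_packets) {
        const long mss = 1460;
        long packets = (bytes + mss - 1) / mss;  /* packets to deliver */
        long cwnd = initial_window_packets;
        int rtts = 0;
        while (packets > 0) {
            rtts++;
            packets -= cwnd;  /* one congestion window per round trip */
            cwnd *= 2;        /* slow start doubles the window */
        }
        return rtts;
    }

    int main(void) {
        long size = 15 * 1024;  /* the ~15 KB response from the post */
        printf("IW3:  %d RTTs\n", rtts_needed(size, 3));   /* prints 3 */
        printf("IW10: %d RTTs\n", rtts_needed(size, 10));  /* prints 2 */
        return 0;
    }

For the ~15 KB example it prints 3 RTTs with an initial window of 3 packets and 2 RTTs with IW10; a response that fits entirely in the initial window needs only a single round trip.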
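
Item 2 is, mechanically, a very small change on the sender. The sketch below assumes the Linux tree, where the initial retransmission timeout is the TCP_TIMEOUT_INIT constant in include/net/tcp.h; it is shown only to indicate where the knob sits, and the exact context varies by kernel version:

    /* Sketch: include/net/tcp.h (exact context varies by kernel version) */
    #define TCP_TIMEOUT_INIT ((unsigned)(1*HZ))  /* initial RTO of 1 second; was 3*HZ */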
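
For item 3, the draft proposes a small extension to the socket API so an application can hand the kernel its first request together with the connection attempt. The following minimal client-side sketch assumes the sendto()-with-MSG_FASTOPEN interface used by the Linux TFO implementation; the address and request are placeholders, and a server that does not support TFO simply falls back to a normal three-way handshake:

    /* Minimal TFO client sketch. Assumes the sendto()-with-MSG_FASTOPEN
     * interface of the Linux TFO implementation; the address and request
     * below are placeholders and error handling is abbreviated. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #ifndef MSG_FASTOPEN
    #define MSG_FASTOPEN 0x20000000  /* may be missing from older headers */
    #endif

    int main(void) {
        const char *req = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n";
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(80);
        inet_pton(AF_INET, "203.0.113.1", &addr.sin_addr);  /* placeholder IP */

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) return 1;

        /* Instead of connect() followed by write(), hand the request to
         * sendto() with MSG_FASTOPEN: with a valid TFO cookie the kernel can
         * carry the request in the SYN; otherwise it falls back to a normal
         * three-way handshake. */
        if (sendto(fd, req, strlen(req), MSG_FASTOPEN,
                   (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("sendto");
            close(fd);
            return 1;
        }

        char buf[4096];
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0)
            fwrite(buf, 1, (size_t)n, stdout);
        close(fd);
        return 0;
    }

On the server side, the Linux implementation enables Fast Open with a TCP_FASTOPEN setsockopt() call before listen().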
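
For item 4, the core of PRR is a per-ACK computation of how many packets the sender may transmit while it reduces its window. The C sketch below follows the published PRR pseudocode, working in packet units; the struct and function names are illustrative and are not the Linux kernel's:

    /* Sketch of PRR's per-ACK send quota, following the published PRR
     * pseudocode but working in packet units. Names are illustrative. */
    struct prr_state {
        unsigned int ssthresh;      /* target window after recovery (packets)   */
        unsigned int recover_fs;    /* flight size when recovery started        */
        unsigned int prr_delivered; /* packets delivered since recovery started */
        unsigned int prr_out;       /* packets sent since recovery started      */
    };

    /* delivered: packets newly delivered (cumulatively or via SACK) by this ACK.
     * pipe: current estimate of packets outstanding in the network.
     * Returns how many packets may be sent in response to this ACK. */
    static unsigned int prr_sndcnt(struct prr_state *s,
                                   unsigned int delivered, unsigned int pipe)
    {
        unsigned int sndcnt;

        s->prr_delivered += delivered;

        if (pipe > s->ssthresh) {
            /* Proportional part: shrink sending in proportion to what the
             * network is actually delivering, aiming at ssthresh. */
            sndcnt = (s->prr_delivered * s->ssthresh + s->recover_fs - 1)
                         / s->recover_fs;               /* ceiling division */
            sndcnt = sndcnt > s->prr_out ? sndcnt - s->prr_out : 0;
        } else {
            /* Slow-start reduction bound: if losses pushed pipe below
             * ssthresh, grow back toward it by at most one extra packet
             * per packet delivered. */
            unsigned int limit = s->prr_delivered > s->prr_out
                                     ? s->prr_delivered - s->prr_out : 0;
            if (limit < delivered)
                limit = delivered;
            limit += 1;
            sndcnt = s->ssthresh > pipe ? s->ssthresh - pipe : 0;
            if (sndcnt > limit)
                sndcnt = limit;
        }
        return sndcnt;
    }

The effect is that during recovery the sending rate tracks the delivery rate reported by incoming ACKs, instead of halving abruptly or stalling when many packets are lost.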

In addition, we are developing algorithms to recover faster on noisy mobile networks, as well as a
guaranteed 2-RTT delivery during startup. All our work on TCP is open-source and publicly available.
We disseminate our innovations through the Linux kernel, IETF standards proposals, and research
publications. Our goal is to partner with industry and academia to improve TCP for the whole Internet.
Please watch this blog and http://code.google.com/speed/ for further information.

Yuchung Cheng works on the transport layer to make the Web faster. He believes the current
transport layer badly needs an overhaul to catch up with other (networking) technologies. He can be
reached at [email protected].

Posted by Scott Knaster, Editor

at 1/23/2012 10:00:00 AM



Labels: faster web

23 comments:

Rich Jones Jan 23, 2012 12:14 PM


This is awesome work. I'm glad that Google is putting their vast engineering
resources to such good use.
Rich, Gun.io
Reply

Clemens Harten Jan 23, 2012 12:24 PM


It is impressive to see how simple changes (justified by extensive research) can lead
to a great improvement of the protocol. Well done!
Reply

Iamien Jan 23, 2012 12:33 PM


It would be nifty if someone with the know-how made an open-source EXE or MSI that sets all of the settings that can be set client-side.
Reply

Replies

Michael Adams Jan 24, 2012 12:38 AM


I wrote a program a few months back that does what you're requesting, for the Microsoft stuff anyway. The Google stuff would have to be developed into a Windows driver to work properly.
http://unquietwiki.com/programming/ (look for NetToggle)

Reply

Nathaniel Borenstein Jan 23, 2012 12:34 PM


I'd be interested in hearing if these experiments have been reproduced in
less-privileged connectivity situations, such as 3G, satellite, or even dial-up. As one
who still, of necessity, lives in that environment most of the time, it's my impression
that researchers often operate under the assumption that these constraints are
rapidly going away, but they remain the only options in many rural areas even in the
US today. It's all too easy to imagine that what optimizes the high-bandwidth
experience, such as shortening the initial timeout from 3 seconds to 1, might actually
make things worse for those of us who already have it worst.
Reply

mcr Jan 23, 2012 12:58 PM


What Nathaniel said. We need default algorithms in stations which respond to the conditions that they experience, and remember this system-wide for a period of time. (Happy Eyeballs is an example of this, but it is browser-specific, not system-specific.) TCP Fast Open (what happened to T/TCP?) likely fails HORRIBLY on banks' Cisco PIX firewalls.
And it's not just connectivity that is an issue, but also the fact that TCP now runs on very small devices, where these things are not relevant and things simply have to work.
How does TCP Fast Open fail when the responder does not understand it?
Reply

John Moehrke Jan 23, 2012 01:03 PM


These are not particularly new items. What is needed is a tool that will automatically discover the best settings given the user's current context. As Nathaniel indicates, not everyone is on a high-bandwidth link. This would likely be made up of code that looks at the current connections to tweak the settings of future connections. Generally each connection starts with some configured default value, and those defaults are static. I suggest the default settings should be close to a current consensus.
Reply

David Bond Jan 23, 2012 01:05 PM


Any thoughts on putting payload into the SYNACKs? (server and client both) This
would be the ultimate speed-up...
Reply

the stealth master Jan 23, 2012 01:26 PM


The buffer bloat issue needs to be resolved first, otherwise you will be wasting a lot
of time.

http://en.wikipedia.org/wiki/Bufferbloat
Reply

harjuo Jan 23, 2012 01:31 PM


David, either participant still could not pass the data to the socket client before the
handshake was complete. They would have to buffer the data, and that would just
amplify the severity of syn flood attacks.
Reply

Haapi Jan 23, 2012 01:33 PM


I can agree that those initiatives are positive things, but ISPs' buffer bloat, which is defeating TCP's existing congestion algorithms and introducing latency, makes them moot. One may consider buffer bloat orthogonal to these issues, but ISPs do it for a reason. Hearing these initiatives phrased in a manner that lets ISPs see solutions to their problem (which is poorly solved by buffering) would be useful.
Reply

Tinctorius Jan 23, 2012 02:15 PM


Have you looked at UDT? There must be some lessons that can be taken from it...
Reply

ycheng Jan 23, 2012 02:21 PM


Thanks for the questions and comments.
@Nathaniel/mcr: compatibility is a key part of the Fast Open design. Our draft has more details on dealing with firewalls and SYN-data drops: http://www.ietf.org/id/draft-cheng-tcpm-fastopen-02.txt
@John: tuning the initial parameters based on history is certainly helpful. We are working on this now.
@DavidBond: Fast Open allows data in the SYN-ACK packet as well.
@harjuo: Both our paper and draft discuss a new socket interface and the SYN-flood issue extensively:
http://www.ietf.org/id/draft-cheng-tcpm-fastopen-02.txt
http://research.google.com/pubs/pub37517.html
@stealth/Haapi: we are experimenting with new algorithms to lower the queuing delay of TCP connections. Please stay tuned for more updates.
Reply

layer3switch Jan 23, 2012 03:24 PM


Please excuse my ignorance, but how will an AES block cipher help mitigate man-in-the-middle interception of the TFO cookie? Wouldn't a stream cipher make better sense, with a short-lived cookie exchange after the second phase?
Reply

Ryan Bonnell Jan 23, 2012 04:44 PM


Interesting article on Bufferbloat. There has been a trend towards Layer3 switches
with reduced buffers so I am surprised that this issue has been getting worse. I
hope we can use these enhancements on load balancers quickly once the tech is
finalized. Using generic Linux servers as load balancers works up to a point. :-)
Reply

Ilya Jan 23, 2012 06:20 PM


Are there instructions or a patch I can apply to my Ubuntu and my servers to adjust
these settings? If so, I can apply it to my servers.
Reply

nm Jan 23, 2012 07:46 PM


This comment has been removed by the author.
Reply

nm Jan 23, 2012 07:50 PM


Are there kernel patches available for the items listed above?

I've been able to find/backport patches for #1 and #4 but have not seen any code anywhere for #2 or #3.
#2 could be a simple one-liner change, but IIRC that leads to other issues.
https://github.com/vrv/linux-microsecondrto
http://www.pdl.cmu.edu/PDL-FTP/Storage/sigcomm147-vasudevan.pdf
For #3, code was promised but I have not seen it publicly posted anywhere.
Reply

Ben Jan 23, 2012 08:06 PM


Do you have any statistics yet for what the combined effect of implementing all of
these changes might be for the "average" internet user?
You would probably get a lot more attention and be more likely to see your changes
adopted in the TCP standard if you published a post that said "Google Researchers
Discover Way To Speed Up Internet By 30%".
Such a headline would lose some specificity, but since most people don't know what
TCP is, it would be an effective way to spread the news and promote change.
Reply

Olivier Bonaventure Jan 24, 2012 02:47 AM


Another work on improving TCP is the development of Multipath TCP that is being finalized within the IETF. See http://www.ietf.org/id/draft-ietf-mptcp-multiaddressed-05.txt for the latest draft. The Linux kernel patch developed at UCLouvain is completely functional and provides good performance on servers and in lab environments. We'd love to be able to perform more tests in real wireless networks where Multipath TCP would provide many benefits as well. See http://mptcp.info.ucl.ac.be/
Reply

Joe Bowman Jan 24, 2012 07:02 AM


Is it really time to attack TCP? Especially since, the way the article is worded, it seems like HTTP is the driving factor behind these results.
The reason I ask is that maybe the focus should continue to be on ideas like SPDY, which sort of turns HTTP into a reusable, persistent-connection type of protocol instead of the stateless beast it is now. There's also still the idea of moving the entire web to SSL that's been floated about. That suddenly makes securing sessions easier, especially when combined with SPDY, if developed with those technologies in place. Then, once you have the more optimized and secure higher-level protocols in place, tune TCP for them. What's the risk of decisions made now?
Also, on item 2, the timeout: I think you might be generalizing a bit. I personally work for an organization where we literally have to support that office on the island with a satellite uplink that only works for 12 hours a day. It has high latency and low bandwidth, and I bet that if you cut timeouts by 2/3 their experience will be degraded. Sure, the way TCP is currently designed probably isn't optimal for 99.9% of the Internet, but that doesn't mean the world is ready to lose that 0.1%.
Reply

James Jan 24, 2012 07:47 AM


What about offloading the transmission control to software? I've used software that
does this and I've seen 100% increases in throughput.
Reply

Iljitsch van Beijnum Jan 24, 2012 08:28 AM


Did you guys look at losses in the last 3 packets that can't be recovered from using
SACK? IMO that is a big reason why HTTP sessions hang.
Reply
