The TCP Tortoise: Optimizations for Emerging Markets


Serving fast pages is a core aspiration at LinkedIn. As part of this effort, we continuously experiment with and study the various layers of our stack, identifying optimizations to ensure that we use the best protocols and configurations at every layer.

As LinkedIn migrated to serving its pages over HTTP/2 earlier this year, it became imperative that we identify and use the best transport layer strategy for our users’ networks. Because our infrastructure is centered on the Transmission Control Protocol (TCP), we initiated an effort to study the effects of different TCP congestion control strategies in different geographies. We found that with the right strategy, we could significantly improve our site speed, resulting in up to 7% faster content downloads.

Why is this important?

To serve content to our users, LinkedIn adopts the popular strategy of using different delivery methods for static and dynamic content. Cached static content, like fonts and images, is served from third-party Content Delivery Networks, or CDNs, while all of our dynamic content is served through LinkedIn’s own Points of Presence, or PoPs. As is typical on today’s internet, the base page’s HTML is the first resource a user’s browser needs to render a web page. It must be received before the client can start requesting other content, so it is imperative that we serve the base page as quickly as possible from our PoPs, thereby speeding up the entire page load. To do this on HTTP/2, there were a few aspects of our network configuration that we looked into.

TCP congestion control
At a high level, TCP congestion control follows a conservative approach to avoid congesting the network. The sender maintains a congestion window for each connection. The congestion window bounds the number of packets that can be outstanding at any given time, thereby limiting how quickly the sender consumes the link’s capacity.

When a new connection is set up, the congestion window is initialized to a predetermined number, usually a multiple of the Maximum Segment Size (MSS). The window then grows in two phases. During the initial “slow start” phase, it doubles for every successful round trip (measured by Round Trip Time, or RTT) until either a threshold value is reached or a timeout occurs. Once the threshold is reached, the window follows a conservative “additive increase, multiplicative decrease” strategy, growing linearly thereafter.

The sender maintains a timer to ensure that acknowledgements for sent packets don’t take too long. A timeout occurs when this timer expires, indicating packet loss and therefore congestion in the network. When this happens, the sender reduces the threshold and collapses the congestion window, following which a “slow start” is engaged. Once congestion is relieved, the window size is cautiously ramped up again.
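To make these phases concrete, here is a toy Python model of the behavior just described: exponential growth in slow start, linear growth past the threshold, and a multiplicative cut followed by a fresh slow start on a timeout. This is a minimal sketch; the initial window, threshold, and the single simulated timeout are illustrative values, not LinkedIn’s production settings.

```python
# Toy model of TCP congestion window dynamics (units are MSS-sized segments).
def simulate(loss_rtts, rounds=16, init_cwnd=10, init_ssthresh=64):
    cwnd, ssthresh = init_cwnd, init_ssthresh
    for rtt in range(rounds):
        if rtt in loss_rtts:
            ssthresh = max(cwnd // 2, 2)    # multiplicative decrease of the threshold
            cwnd = 1                        # collapse the window, re-enter slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)  # slow start: double every round trip
        else:
            cwnd += 1                       # congestion avoidance: +1 MSS per round trip
        print(f"RTT {rtt:2d}: cwnd={cwnd:3d}  ssthresh={ssthresh:3d}")

simulate(loss_rtts={8})  # a single simulated timeout at round trip 8
```

Running this shows the window climbing quickly, flattening into linear growth, then collapsing at the timeout and slowly recovering, which is exactly the sawtooth pattern that makes stray losses so expensive.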

Clearly, from the sender’s perspective, the value of the congestion window determines how much data is transmitted in each round trip, and thus the throughput of the connection. When a network path is characterized by long delays, or suffers stray packet losses, these conditions can easily be mistaken for signs of congestion and drastically shrink the congestion window. These are commonly referred to as the “High-Bandwidth” and “Lossy-Link” problems, and they illustrate the default strategy’s intolerance of small losses that are not necessarily caused by actual congestion in the network.
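The “Lossy-Link” problem can be quantified with the well-known Mathis et al. approximation, which bounds Reno-style TCP throughput by (MSS / RTT) × (C / √p), where p is the packet loss rate and C ≈ 1.22. The sketch below plugs in hypothetical numbers to contrast a short, clean path with a long, lossy one; the MSS, RTT, and loss figures are examples only, not measurements from our network.

```python
# Mathis et al. approximation for Reno-style TCP throughput.
from math import sqrt

def mathis_throughput_mbps(mss_bytes, rtt_s, loss_rate, c=1.22):
    return (mss_bytes * 8 / rtt_s) * (c / sqrt(loss_rate)) / 1e6

print(mathis_throughput_mbps(1460, 0.03, 0.0001))  # short, clean path: ~47 Mbps
print(mathis_throughput_mbps(1460, 0.30, 0.01))    # long, lossy path:  ~0.5 Mbps
```

Even a 1% loss rate on a 300 ms path caps a single connection at well under 1 Mbps, regardless of how much capacity the link actually has.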

Thus, the choice of an optimal TCP congestion control strategy becomes critical to preventing spurious slowdowns of our site. This is especially pressing with HTTP/2, because it reuses a single TCP connection per origin.

HTTP/2
Each HTTP/2 session establishes a single TCP connection and multiplexes its streams over that connection. Though this strategy saves the network round trips needed to set up multiple TCP connections, multiplexing too many streams on a single connection can easily produce bursts of bandwidth use that are misconstrued as congestion at the TCP layer. This can have even deeper implications in emerging markets, where already suboptimal network conditions (e.g., longer round trips and a higher bandwidth-delay product) quickly compound to slow down connections.
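As a rough illustration of why the bandwidth-delay product (BDP) matters here, the sketch below computes how many segments must be in flight to keep a path busy. The link speed and RTT values are hypothetical examples: a single HTTP/2 connection on a 300 ms path needs a congestion window several times larger than on a 50 ms path, so any spurious window cut hurts it proportionally more.

```python
# Bandwidth-delay product: the data that must be in flight to fill the pipe.
def bdp_segments(bandwidth_mbps, rtt_ms, mss_bytes=1460):
    bdp_bytes = (bandwidth_mbps * 1e6 / 8) * (rtt_ms / 1000)
    return bdp_bytes / mss_bytes

print(bdp_segments(10, 50))   # ~43 segments on a short path
print(bdp_segments(10, 300))  # ~257 segments on a long, high-RTT path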

TCP versus TCP

Over the years, numerous TCP congestion control strategies have been proposed to solve different problems faced by TCP. From our initial round of experiments with 11 TCP congestion control algorithms, we picked the three best-performing strategies, each with a different approach to the congestion control problem, and compared them to the default algorithm on our infrastructure, HTCP.
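For context on how such a comparison can be set up, Linux allows the congestion control algorithm to be selected per socket via the TCP_CONGESTION socket option (exposed in Python 3.6+ on Linux). The sketch below is illustrative, not a description of LinkedIn’s experiment harness; the named algorithm must be available in the kernel (see /proc/sys/net/ipv4/tcp_allowed_congestion_control), and selecting a non-default one may require elevated privileges.

```python
# Minimal sketch: select a congestion control algorithm for one socket on Linux.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"htcp")

# Read back what the kernel actually applied to this socket.
algo = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print(algo.rstrip(b"\x00").decode())
```

Setting the option per socket makes it possible to A/B test algorithms side by side on the same host, rather than switching the system-wide default.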

The table below provides a feature highlight for the algorithms we compared.


