The opportunity in India
India is a mobile-first country, with 71% of the population having only a mobile internet connection. Moreover, 85% of the mobile population accesses the internet on an Android device, with UCWeb the most popular browser, followed by Chrome and Opera.
LinkedIn India has more than 47 million members and is one of LinkedIn’s biggest markets, second only to the US. Of the thousands of new members signing up for LinkedIn in India every day, a staggering 55% do so via mobile.
For this rapidly-growing, mobile-first market, we introduced the LinkedIn Lite web experience in 2016. Later, in 2017, we expanded the LinkedIn Lite experience to members in 60+ countries, providing a mobile web experience that was sometimes four times faster than the regular site. In this blog post, we discuss the process of identifying the market fit for LinkedIn Lite, analyzing our site performance for India’s mobile-first population, and the performance optimizations we added to the LinkedIn Lite app.
The case for a light-weight mobile web experience
In India, 85% of LinkedIn members using a mobile device use an Android phone, with roughly 75% of them accessing LinkedIn using the Chrome browser. In contrast, other popular sites in the country, such as e-commerce giants Amazon and Flipkart, may see a different traffic distribution across browsers, with a majority of their users on UCWeb, a browser that employs server-side compression to reduce page load times. In short, we needed a mobile experience in India that was on par with (or better than) the experiences our members were getting on other sites. Added to that was the fact that more and more Indians were connecting to LinkedIn every day, on networks all across the country—we were worried that even minor app performance issues could be compounded by slowdowns elsewhere in the network.
The older LinkedIn.com mobile web experience was built several years ago on a Node.js stack using jQuery and other libraries, and it wasn’t performant: page load times at the 90th percentile often fell outside LinkedIn’s site speed performance goals. At the time, we attributed much of this shortfall to the site’s performance in countries with low bandwidth.
Around April 2016, we did a deep-dive analysis of the performance bottlenecks of the LinkedIn mobile frontend and deduced the following:
DNS lookup times could be as long as a second if the browser/OS hadn’t cached the DNS entry, but this was rare unless the network was flaky and reception was poor; overall, non-zero DNS lookup times were uncommon.
The connect times in India varied a lot, and at times took as long as 2 seconds!
The old LinkedIn mobile stack had many redirects into different systems to determine the appropriate experience for the member, since a couple of independent stacks were serving LinkedIn.com at the time. Depending on several factors, including SSL, there were between two and four redirects. On a slow connection (lower 3G speeds or during network congestion), the time spent in redirects was about 3.7 seconds on average. The time to first byte (TTFB) was about 5.6 seconds.
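The phases above (DNS lookup, connect, redirects, TTFB) can be derived from the browser’s Navigation Timing data. A minimal sketch—the helper itself is a hypothetical illustration, not LinkedIn’s actual instrumentation, though the field names follow the W3C Navigation Timing spec:

```javascript
// Derive the network phases discussed above from a Navigation Timing
// entry. Field names follow the W3C Navigation Timing spec; the helper
// itself is a hypothetical illustration.
function navPhases(t) {
  return {
    dnsMs: t.domainLookupEnd - t.domainLookupStart, // DNS lookup time
    connectMs: t.connectEnd - t.connectStart,       // TCP/TLS connect time
    redirectMs: t.redirectEnd - t.redirectStart,    // time spent in redirects
    ttfbMs: t.responseStart - t.startTime,          // time to first byte
  };
}

// In a browser:
// navPhases(performance.getEntriesByType("navigation")[0]);
```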
Effect of payload and large JS bundles on performance
The payload size is directly proportional to the time to transfer the content. Without boring you with too many details: because of TCP slow start, the amount of data that can be sent per round trip grows exponentially, not accounting for packet loss. A round-trip time (RTT) takes a maximum of 1.2 seconds in India on most cellular networks. Assuming no packet loss, 13 KiB is transferred in 1 RTT, 39 KiB cumulatively in 2 RTTs, up to 91 KiB by the third RTT, and so on. So it’s no surprise that the smaller the payload, the faster the transfer.
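The round-trip numbers above follow directly from slow start doubling the window each RTT. A small sketch of that arithmetic (the 13 KiB first-RTT figure and the 1.2-second worst-case RTT are the estimates from this post):

```javascript
const FIRST_RTT_KIB = 13;    // payload that fits in the first round trip
const MAX_RTT_SECONDS = 1.2; // worst-case RTT on Indian cellular networks

// Cumulative KiB deliverable after n round trips, assuming the window
// doubles each RTT and there is no packet loss:
// 13 + 26 + 52 + ... = 13 * (2^n - 1).
function cumulativeKiB(rtts) {
  return FIRST_RTT_KIB * (2 ** rtts - 1);
}

// Round trips (and hence worst-case seconds) needed for a payload.
function rttsNeeded(payloadKiB) {
  let rtts = 0;
  while (cumulativeKiB(rtts) < payloadKiB) rtts += 1;
  return rtts;
}

// cumulativeKiB(3) is 91, matching the numbers above; a 150 KiB page
// would need rttsNeeded(150) = 4 round trips, i.e. up to ~4.8 seconds.
```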
Project Bolt, a simple beginning
With the product team setting a goal of a 6-second page load time on a 2G/low-3G (approximately 100–150 Kbps) network, the engineering team established the following tenets.
Server-side rendering (SSR)
Rendering the first view on the server gives good perceived performance for end users and in turn improves the overall member experience. SSR is also faster to render in cold cache/first visit situations, which account for a large majority of use cases:
Members visit the site for the first time, or the browser has evicted the cached assets—the visit can also come from an email client or a push channel.
The site has been updated since the member’s last visit, especially given the fact that we deploy almost daily at LinkedIn.
A smaller payload means fewer roundtrips, and hence a faster transfer time.
There was a detailed discussion and quite a lot of research around this, which is captured succinctly in this thread. While the thread is on Ember, it’s true for almost all use-cases and applies to all of the popular frameworks that are in use today.
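To illustrate the SSR tenet, here is a minimal, hypothetical sketch (not LinkedIn Lite’s actual rendering code): the server’s first response is complete, paintable HTML, so the browser doesn’t have to wait for an application bundle to download and execute before showing content.

```javascript
// Hypothetical SSR sketch: the first response is complete HTML that the
// browser can paint immediately; the app bundle loads afterwards
// (deferred) to attach interactivity.
function renderFeedHtml(items) {
  const list = items.map((text) => `<li>${text}</li>`).join("");
  return (
    "<!doctype html><html><body>" +
    `<ul id="feed">${list}</ul>` +
    '<script src="/app.js" defer></script>' +
    "</body></html>"
  );
}
```

A client-rendered app would instead ship an empty shell plus the bundle, pushing the first meaningful paint out by at least the bundle’s download and parse time—significant on the slow connections described above.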
As called out earlier, redirects are a huge performance penalty in general, and even more so in emerging markets, where bandwidth is low and network congestion is high. By removing redirects, we can save up to 3.5 seconds compared to the legacy mobile web site.
Leverage the (modern) browser