LinkedIn Lite: A Lightweight Mobile Web Experience

The opportunity in India

India is a mobile-first country, with 71% of the population having only a mobile internet connection, accessing the internet exclusively via mobile. Additionally, 85% of the mobile population accesses the internet on an Android device, with UCWeb being the most popular browser, followed by Chrome and Opera.

LinkedIn India has more than 47 million members and is one of LinkedIn’s biggest markets, second only to the US. Of the thousands of new members signing up for LinkedIn in India every day, a staggering 55% do so via mobile.

For this rapidly growing, mobile-first market, we introduced the LinkedIn Lite web experience in 2016. Later, in 2017, we expanded the LinkedIn Lite experience to members in 60+ countries, providing a mobile web experience that was sometimes four times faster than the regular site. In this blog post, we discuss the process of identifying the market fit for LinkedIn Lite, analyzing our site performance for India’s mobile-first population, and the performance optimizations we added to the LinkedIn Lite app.

The case for a lightweight mobile web experience

In India, 85% of LinkedIn members using a mobile device use an Android phone, with roughly 75% of them accessing LinkedIn through the Chrome browser. In contrast, other popular sites in the country, such as e-commerce giants Amazon and Flipkart, may see a different traffic distribution across browsers, with a majority of users potentially accessing them from UCWeb, a browser that employs server-side compression to reduce page load times. In short, we needed a mobile experience in India that was on par with (or better than) the experiences our members were getting on other sites. On top of that, more and more Indians were connecting to LinkedIn every day, on networks all across the country, and we were worried that even minor app performance issues could be compounded by slowdowns elsewhere in the network.

The older mobile web experience was built several years ago on a Node.js stack using jQuery and other libraries, and it wasn’t performant: page load times at the 90th percentile often fell outside LinkedIn’s site speed performance goals. At the time, we believed the primary reason for this discrepancy was the site’s performance in countries with low bandwidth.

Performance analysis
Around April 2016, we did a deep-dive analysis of the performance bottlenecks of the LinkedIn mobile frontend and deduced the following:

  • The DNS lookup times, if the browser/OS hadn’t cached the DNS, could be as long as a second, but this was quite rare unless the network was flaky and the reception was poor. The number of non-zero DNS lookup times was quite low overall.

  • The connect times in India varied a lot, and at times took as long as 2 seconds!

  • The old LinkedIn mobile stack had many redirects into different systems to determine the appropriate experience for the member, since a couple of independent stacks were serving traffic at the time. Depending on several factors, including SSL, there were roughly two to four redirects. On a slow connection (lower 3G speeds or during network congestion), the time spent in redirects was about 3.7 seconds on average. The time to first byte (TTFB) was about 5.6 seconds.

Effect of payload and large JS bundles on performance
The previous generation mobile web stack was shipping over 500 KiB of JavaScript to boot its client-side rendered application. With such payloads, the problems are two-fold:

  1. The payload size is directly proportional to the time to transfer the content. Without boring you with too many details: because of TCP slow start, the amount of data that can be sent roughly doubles with each round trip, not accounting for packet loss. A round-trip time (RTT) in India takes up to 1.2 seconds on most cellular networks. Assuming no packet loss, 13 KiB is transferred in 1 RTT, 39 KiB takes 2 RTTs, up to 91 KiB can be transmitted by the third RTT, and so on. So it’s no surprise that the smaller the payload, the faster the transfer.

  2. It has been established time and again that large JavaScript bundles, especially those required to initialize the app, suffer from long parse and compile times. Our research also showed that JavaScript parse/compile times are typically much longer on Android phones than on iOS phones, due to differences in processor architecture and performance. Since JavaScript is parsed/compiled on a single core, larger bundles take longer to bootstrap on Android phones, which are the most prevalent in India.
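The round-trip arithmetic above can be sketched as a quick calculation, assuming TCP slow start doubles the data sent on each round trip (the starting figure of 13 KiB is illustrative; real values depend on the initial congestion window and segment size):

```javascript
// Rough model of TCP slow start: the data sent per round trip doubles,
// starting at ~13 KiB per the figures above (illustrative).
function roundTripsFor(payloadKiB, firstRttKiB = 13) {
  let transferred = 0;
  let perRtt = firstRttKiB;
  let rtts = 0;
  while (transferred < payloadKiB) {
    transferred += perRtt; // deliver this round trip's worth of data
    perRtt *= 2;           // slow start: window doubles next round trip
    rtts += 1;
  }
  return rtts;
}

console.log(roundTripsFor(13));  // 1 RTT
console.log(roundTripsFor(91));  // 3 RTTs (13 + 26 + 52 KiB)
console.log(roundTripsFor(500)); // a 500 KiB bundle needs 6 RTTs
```

At a worst-case 1.2 seconds per RTT, the gap between 3 round trips and 6 is the difference between a usable page and an abandoned one.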

Project Bolt, a simple beginning

With the product team’s goal of a 6-second page load time on a 2G/low 3G (approximately 100-150 Kbps) network, the engineering team established the following tenets.

Server-side rendering (SSR)
Rendering the first view on the server gives good perceived performance for end users and in turn improves the overall member experience. SSR also renders faster in cold-cache/first-visit situations, which cover a good majority of use cases:

  1. Members visit the site for the first time, or the browser has evicted the asset cache; the visit can also come from an email client or a push channel.

  2. The site has been updated since the member’s last visit, especially given that we deploy almost daily at LinkedIn.
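A minimal sketch of the idea (illustrative, not LinkedIn’s actual stack): the server sends complete HTML for the first view, so the browser can paint before any JavaScript bundle is fetched, parsed, or executed.

```javascript
// Server-side rendering sketch: the first response is a full HTML
// document, and the client bundle (a hypothetical /app.js) loads
// afterward with `defer`, only to enhance the already-visible page.
function renderFeed(items) {
  const list = items.map((text) => `<li>${escapeHtml(text)}</li>`).join("");
  return `<!doctype html><html><body><ul>${list}</ul>` +
         `<script src="/app.js" defer></script></body></html>`;
}

// Escape user-provided text so it is safe to embed in HTML.
function escapeHtml(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

// A Node HTTP handler would send renderFeed(...) as the response body.
console.log(renderFeed(["Welcome back!", "3 new connections"]));
```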

Smaller payload
A smaller payload means fewer roundtrips, and hence a faster transfer time.

By capping the payload to less than 90 KiB (we set an internal limit of 75 KiB), the payload can be transferred in roughly 3 RTTs. Assuming a slow or congested network, where 1 RTT takes a second or more, the total transfer would be under 4 seconds, giving us enough headroom. Apart from the faster transfer, a smaller payload also helps with parsing times, especially for JavaScript.
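Putting the numbers together as a quick back-of-the-envelope check (figures taken from the measurements above):

```javascript
// A payload under ~91 KiB fits in 3 round trips under TCP slow start
// (13 + 26 + 52 KiB), and on a congested network one RTT can take ~1.2 s.
const rtts = 3;
const secondsPerRtt = 1.2;
const transferSeconds = rtts * secondsPerRtt;
console.log(transferSeconds.toFixed(1)); // "3.6" — under the 4-second budget
```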

Less JavaScript
Shipping less JavaScript to the browser directly translates to faster parse/compile times, and hence, a faster, snappier UI.

There was a detailed discussion and quite a lot of research around this, which is captured succinctly in this thread. While the thread is about Ember, its conclusions hold for almost all use cases and apply to all of the popular frameworks in use today.

The high-level overview is that JavaScript parsing is generally faster on high-powered iOS chips than on the lower-powered, multi-core Android chips that are most prevalent in emerging markets like India.

This hypothesis was later supported by Addy Osmani of Google in a recently published article on the cost of JavaScript.

No redirects
As called out earlier, redirects are a huge performance penalty in general, and even more so in emerging markets, where bandwidth is low and network congestion is high. By removing redirects, we can save up to 3.5 seconds compared to the legacy mobile web site.

Leverage the (modern) browser
Based on our research, we found that about 70% of members were using Chrome on an Android device, which means they’re automatically updated most of the time and running the latest browser with all the latest goodness, like Promises, the Fetch API, etc. That means there’s no need to ship polyfills for 70% of our members, making our JavaScript bundle smaller, which directly translates to faster parse times and a better-performing website.
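One common way to put this into practice (a sketch, not LinkedIn’s exact approach) is to feature-detect and lazily load polyfills only for the browsers that need them; the `/polyfills.js` bundle here is hypothetical:

```javascript
// Feature-detect before loading polyfills: modern browsers skip the
// extra download, parse, and compile entirely.
function needsPolyfills() {
  return typeof Promise === "undefined" || typeof fetch === "undefined";
}

// Boot the app directly on modern browsers; on older ones, load a
// hypothetical polyfill bundle first, then boot.
function loadPolyfillsThenBoot(boot) {
  if (!needsPolyfills()) {
    boot();
    return;
  }
  const script = document.createElement("script");
  script.src = "/polyfills.js"; // hypothetical Promise/fetch shims
  script.onload = boot;
  document.head.appendChild(script);
}
```

In the browser, `loadPolyfillsThenBoot(startApp)` would be the entry point, so the 70% of members on modern Chrome never pay for the polyfills.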

LinkedIn Lite architecture
