Building a more intuitive and streamlined search experience


Co-authors: Rashmi Jain and Sonali Bhadra

Search on LinkedIn is an essential part of our members’ experience, with more than 44% of members who visit the site each week performing at least one search. We’re continually working to improve that experience and recently introduced a new streamlined and intuitive search experience that brings together people, jobs, groups, companies, courses, and more in a more discoverable manner. Since ramping, we’ve seen strong, tangible signs that the overhaul is helping members connect to more communities, opportunities, and resources. Meanwhile, on the backend, we’ve removed major roadblocks for our engineers, reducing the time it takes to build and ramp a new search entity type from 21 weeks to 1 week.   

In this blog post, we’ll detail our journey of rebuilding search, explain how we prioritized both the member and engineer experience, and share what we learned along the way. Our challenge as an engineering team was to rethink our search infrastructure in a way that would enable our members to better connect with the opportunities, resources, and communities on LinkedIn, while also empowering our team internally by removing the roadblocks that had sprung up as we grew to nearly 740 million members.

Setting the scene

During our last search rewrite in 2015, we prioritized simplification of our tech stack and development speed through continuous integration and deployment. Since then, we’ve seen strong member adoption of search on a number of fronts. However, we started to run into challenges, such as inconsistent and non-intuitive experiences, as well as long turnaround times for adding new searchable entities. Onboarding a new use case to search was a cumbersome process across the stack: entity owners had to understand search’s domain-specific architectures and manually integrate their data into multiple layers of the search ecosystem, resulting in significant bottlenecks and repeated iterations.

In addition, our frontend architecture for search had several limitations when it came to scaling new features and experimentation velocity. This was due to a lack of standardization in the code architecture and duplicated effort across clients (mobile, desktop), which caused a significant increase in app size. These architectural problems varied in nature and intensity across platforms.

As we looked ahead, we needed next-generation frontend technology that could scale to serve search’s growing product needs, built on the following guiding principles: architectural consistency; separation of concerns; efficiency; and developer productivity.

We also knew we needed to become more nimble, especially with regard to the development of new search entities and experiences. It previously took around 21 weeks to onboard a new entity for search. That was unacceptable, so we set a goal of onboarding new search entities in under one week.
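To make that target concrete, the key idea is to let entity owners describe a new searchable entity declaratively instead of hand-integrating it into every layer of the stack. The sketch below is purely illustrative and uses assumed names (SearchEntityDescriptor, registerSearchEntity, the field names, and the "course" example); it is not LinkedIn’s actual onboarding API.

```typescript
// Hypothetical, simplified illustration of declarative entity onboarding.
// None of these type or field names come from LinkedIn's platform.
interface SearchEntityDescriptor {
  entityType: string;                 // e.g. "course"
  backendService: string;             // service that owns the data and its index
  searchableFields: string[];         // fields the index should cover
  resultTemplate: "small" | "large";  // shared render template to reuse
  typeaheadEnabled: boolean;          // opt in to search suggestions
}

// With a platformized stack, onboarding reduces to registering a descriptor
// rather than writing bespoke code in the clients, API, midtier, and backend.
const courseEntity: SearchEntityDescriptor = {
  entityType: "course",
  backendService: "learning-search",
  searchableFields: ["title", "skills", "author"],
  resultTemplate: "large",
  typeaheadEnabled: true,
};

function registerSearchEntity(descriptor: SearchEntityDescriptor): void {
  // In a real platform, each layer (routing, federation, ranking, rendering)
  // would pick this descriptor up without custom per-entity changes.
  console.log(`Registered search entity: ${descriptor.entityType}`);
}

registerSearchEntity(courseEntity);
```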

Our approach 

We set clearly defined design principles and product guidelines, centered on the member experience, to keep the team aligned. For the purposes of this blog, however, we’ll focus on the development categories we kept in mind as we built the new search experience. These helped us ensure high craftsmanship, scalable engineering design, debuggable code, and a clear understanding of metrics.

  • Commitment to craftsmanship, including a consistent architecture across platforms 

  • Testable and debuggable platform, including a unit-to-scenario test coverage ratio of 70%

  • An accessible and localized experience

  • Secured and trusted, with built-in security for each layer

  • A fast experience that didn’t sacrifice quality

The LinkedIn search ecosystem

It’s also important to understand the complexity of the LinkedIn search ecosystem, which consists of multiple layers, each with a specific job, as described below. To enable faster onboarding of new entities to the search ecosystem, we planned a platformization effort for each layer. In this blog post, we’ll focus on the top layer, namely the frontend clients and the frontend API, along with our improvements in tracking coverage, which give the machine learning pipelines robust data for experimentation.

  • Flagship Search frontend clients and API: Web, iOS, and Android clients that interact with members and display search results, and the API responsible for routing requests to the correct backends and stitching together the responses for decoration and presentation of search results and suggestions (a simplified sketch of this flow follows the list).

  • Search midtier: Includes a federator responsible for scatter-gather of different result types and a machine learning pipeline responsible for ranking and blending the result types.

  • Search backend: Includes the offline and online indexes that power the search engine, as well as machine learning pipelines responsible for maximizing successful searches based on intent.
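To make the division of labor across these layers concrete, here is a minimal sketch of the scatter-gather and stitching flow described above. The service URL, result types, and blending logic are illustrative assumptions, and the responsibilities are collapsed into a single function for brevity; in the real system they are split between the frontend API and the midtier federator, with ML-based ranking and blending rather than a simple score sort.

```typescript
// Minimal sketch of scatter-gather federation and result stitching.
// Names, URLs, and the naive blending are assumptions, not LinkedIn's implementation.
type ResultType = "people" | "jobs" | "companies" | "courses";

interface SearchResult {
  type: ResultType;
  id: string;
  score: number;    // relevance score returned by the vertical's backend
  payload: unknown; // entity data to be decorated for presentation
}

// Each vertical (people, jobs, ...) is served by its own search backend.
async function queryVertical(type: ResultType, query: string): Promise<SearchResult[]> {
  const response = await fetch(
    `https://search-backend.example.com/${type}?q=${encodeURIComponent(query)}`
  );
  return response.json();
}

// Scatter the query to every vertical in parallel, then gather and blend
// the heterogeneous result types into a single ranked list.
async function federatedSearch(query: string): Promise<SearchResult[]> {
  const verticals: ResultType[] = ["people", "jobs", "companies", "courses"];
  const perVertical = await Promise.all(verticals.map((v) => queryVertical(v, query)));
  // Naive blending by score; the midtier's ML pipeline does this in practice.
  return perVertical.flat().sort((a, b) => b.score - a.score);
}
```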


