InSearch: LinkedIn’s new message search platform


Figure: High-level Searcher flow with RocksDB key-value structure

When a member performs their first message search, we run a prefix scan of keys from RocksDB, with the member's ID as the prefix. This gives us all of that member's documents, which are then used to construct the index. After the first search, the index is cached in memory. The results? We observed a cache hit ratio of around 90%, and a resulting overall 99th percentile latency on the order of 150ms.
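
To make the read path concrete, here is a minimal sketch of that prefix scan using the RocksDB Java API. It assumes keys are encoded with the member ID as a prefix followed by a document ID; the class and method names are illustrative, not the actual InSearch code.

```java
import org.rocksdb.RocksDB;
import org.rocksdb.RocksIterator;

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MemberDocumentScanner {

  private final RocksDB db;

  public MemberDocumentScanner(RocksDB db) {
    this.db = db;
  }

  /**
   * Loads all (encrypted) documents for a member by scanning keys that share
   * the member-ID prefix; the caller decrypts them and builds the in-memory index.
   */
  public List<byte[]> loadDocumentsForMember(String memberId) {
    byte[] prefix = memberId.getBytes(StandardCharsets.UTF_8);
    List<byte[]> documents = new ArrayList<>();
    try (RocksIterator it = db.newIterator()) {
      for (it.seek(prefix); it.isValid() && hasPrefix(it.key(), prefix); it.next()) {
        documents.add(it.value());
      }
    }
    return documents;
  }

  private static boolean hasPrefix(byte[] key, byte[] prefix) {
    if (key.length < prefix.length) {
      return false;
    }
    return Arrays.equals(Arrays.copyOfRange(key, 0, prefix.length), prefix);
  }
}
```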

For writes, we insert encrypted data into RocksDB. If the member's index is cached, the cached index is also updated by re-reading the updated document from the DB. We persist the set of cached member IDs as well, so that their indexes can be reloaded into the cache on startup. This keeps the cache warm, even after deployments.
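
A rough sketch of that write path, under the same key-encoding assumption, might look like the following; MemberIndex stands in for whatever per-member index structure is cached.

```java
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative write path: persist to RocksDB, then refresh the cached index if present. */
public class MessageWriter {

  private final RocksDB db;
  // memberId -> in-memory search index; MemberIndex is a stand-in for the real index type.
  private final Map<String, MemberIndex> indexCache = new ConcurrentHashMap<>();

  public MessageWriter(RocksDB db) {
    this.db = db;
  }

  public void upsertDocument(String memberId, String documentId, byte[] encryptedDoc)
      throws RocksDBException {
    // Illustrative key layout: member-ID prefix, a separator, then the document ID.
    byte[] key = (memberId + ":" + documentId).getBytes();
    db.put(key, encryptedDoc);

    MemberIndex cached = indexCache.get(memberId);
    if (cached != null) {
      byte[] stored = db.get(key);  // re-read the updated document from the DB
      cached.updateDocument(documentId, stored);
    }
  }

  /** Stand-in for the per-member in-memory index. */
  interface MemberIndex {
    void updateDocument(String documentId, byte[] encryptedDoc);
  }
}
```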

Partitioning, replication, and backups

As with most distributed systems, replication and partitioning are how we address scalability and availability. The data is partitioned by a combination of member ID and document ID. This allows a single member's data to be distributed across multiple partitions, and helps us scale horizontally for members with large inboxes, since the index-creation load is shared across those partitions.
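
As an illustration, a partitioner over the composite (member ID, document ID) key could look like the sketch below; the hash function and class names are placeholders, not the scheme InSearch actually uses.

```java
import java.nio.charset.StandardCharsets;

/** Illustrative partitioner: routes each (memberId, documentId) pair to a partition,
 *  so a single member's documents can spread across several partitions. */
public final class DocumentPartitioner {

  private final int numPartitions;

  public DocumentPartitioner(int numPartitions) {
    this.numPartitions = numPartitions;
  }

  public int partitionFor(String memberId, String documentId) {
    String compositeKey = memberId + ":" + documentId;
    int hash = fnv1aHash(compositeKey.getBytes(StandardCharsets.UTF_8));
    return Math.floorMod(hash, numPartitions);
  }

  // Simple FNV-1a hash as a placeholder for whatever hash the real system uses.
  private static int fnv1aHash(byte[] bytes) {
    int hash = 0x811C9DC5;
    for (byte b : bytes) {
      hash ^= (b & 0xFF);
      hash *= 0x01000193;
    }
    return hash;
  }
}
```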

We have three live replicas for each searcher partition and one backup replica. Each replica independently consumes indexing events from Kafka streams. We have monitoring in place to ensure that no replica develops significantly more lag than its peers. The backup replica periodically uploads a snapshot of the database to our internal HDFS cluster. We also back up the Kafka offsets along with each snapshot. These offsets are used to ensure that a service instance starting from a backup data set fully catches up on the data missing from Kafka before it begins serving.
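
That catch-up step on restore can be sketched with the Kafka consumer API: seek each partition to the offset recorded at snapshot time and replay forward from there. The class below is illustrative only, not the actual restore code.

```java
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.util.Map;

/** Illustrative restore step: rewind the indexing consumer to the offsets captured
 *  at snapshot time, so everything produced after the backup gets replayed. */
public class SnapshotRestorer {

  public void resumeFromSnapshot(KafkaConsumer<byte[], byte[]> consumer,
                                 Map<TopicPartition, Long> snapshotOffsets) {
    consumer.assign(snapshotOffsets.keySet());
    for (Map.Entry<TopicPartition, Long> entry : snapshotOffsets.entrySet()) {
      // Seek to the first offset not covered by the snapshot; polling from here
      // replays the missing data before the searcher starts serving traffic.
      consumer.seek(entry.getKey(), entry.getValue());
    }
  }
}
```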

Ingestion

The source of truth for messaging data is a set of Espresso tables. We consume updates from these tables via Brooklin streams into a Samza job, which transforms the changelogs into the format required by the searchers for indexing. The stream-processing job joins this stream with other datasets to decorate IDs with the actual data needed at search time (e.g., decorating member IDs with their names). It also takes care of partitioning the data for the searchers, so each searcher host only has to consume the specific Kafka partitions that it hosts.
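
The transform-and-decorate step could be sketched as below. This is a simplified stand-in for the actual Samza job: the record shapes are invented for illustration, and it reuses the DocumentPartitioner sketched earlier to pick the target searcher partition.

```java
import java.util.Map;

/** Simplified stand-in for the stream-processing transform: decorate IDs with
 *  display data and compute the searcher partition for each indexing event. */
public class IndexingEventTransformer {

  private final Map<Long, String> memberNames;   // joined-in dataset, e.g. memberId -> name
  private final DocumentPartitioner partitioner; // same partitioning scheme the searchers use

  public IndexingEventTransformer(Map<Long, String> memberNames, DocumentPartitioner partitioner) {
    this.memberNames = memberNames;
    this.partitioner = partitioner;
  }

  /** A changelog record from the source-of-truth table (fields are illustrative). */
  public record Changelog(long memberId, String documentId, String body) {}

  /** The event emitted to the Kafka partition owned by the target searcher. */
  public record IndexingEvent(long memberId, String documentId, String senderName,
                              String body, int partition) {}

  public IndexingEvent transform(Changelog change) {
    String senderName = memberNames.getOrDefault(change.memberId(), "unknown");
    int partition = partitioner.partitionFor(
        Long.toString(change.memberId()), change.documentId());
    return new IndexingEvent(change.memberId(), change.documentId(),
        senderName, change.body(), partition);
  }
}
```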

Broker

The Broker service is the entry-point for search queries. It is responsible for:

  • Query Rewriting: It rewrites the raw query (example: “apple banana”) into the InSearch format (example: TITLE:(apple AND banana) OR BODY:(apple AND banana)) based on the query use case. Different search queries may prioritize certain fields to search on, with varying scoring parameters (a sketch of this rewrite follows the list). 
  • Scatter-gather: It fans out requests to the searcher hosts and collates their results.
  • Retries: Brokers retry a request on a different searcher replica in case of retriable failures or if a particular searcher is taking too long to respond.
  • Re-ranking: The results from all searcher hosts are re-ranked for a final result set, and pruned based on pagination parameters.
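
As a concrete illustration of the first responsibility, here is a minimal rewriter for the example query above; real rewrites depend on the use case and also carry scoring parameters.

```java
/** Illustrative query rewriter for the example in the list above:
 *  "apple banana" -> TITLE:(apple AND banana) OR BODY:(apple AND banana). */
public final class QueryRewriter {

  public String rewrite(String rawQuery) {
    // Join the raw terms with AND, then search both fields.
    String conjunction = String.join(" AND ", rawQuery.trim().split("\\s+"));
    return "TITLE:(" + conjunction + ") OR BODY:(" + conjunction + ")";
  }
}
```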

Brokers use our internal D2 service (backed by ZooKeeper), which maintains the list of searcher hosts for each partition, to discover the hosts to fan out to. We also ensure sticky routing to these hosts, such that requests for a given member go to the same searcher replica; this prevents the index from being rebuilt on multiple replicas and delivers a consistent search experience.
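
Sticky routing can be illustrated with a simple hash of the member ID over the replica list for a partition; the class below is a sketch, not the actual D2-based routing. A production setup would more likely use consistent hashing so that replica membership changes do not reshuffle every member.

```java
import java.util.List;

/** Illustrative sticky routing: a given member consistently maps to the same
 *  searcher replica within a partition, so the member's index is built and
 *  cached on only one live replica. */
public final class StickyReplicaSelector {

  /** replicas: the searcher hosts for the target partition, e.g. as resolved via D2. */
  public String selectReplica(long memberId, List<String> replicas) {
    if (replicas.isEmpty()) {
      throw new IllegalStateException("no searcher replicas available for partition");
    }
    int index = Math.floorMod(Long.hashCode(memberId), replicas.size());
    return replicas.get(index);
  }
}
```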

Conclusion

Today, all message search requests from the flagship LinkedIn app are served by InSearch, with a 99th percentile latency of under 150 ms.

We are currently in the process of migrating several of our enterprise use cases into the new system, and evaluating other applications. Additionally, we are now starting to leverage the new messaging search system to accelerate improvements to the LinkedIn messaging experience. 

Acknowledgements

The engineering team behind InSearch consists of Shubham Gupta, who envisioned and championed the project in the early days as well as played a key role in its design and development, Ali Mohamed, Jon Hannah, and us. InSearch was also an amazing team effort across organizations. We would like to thank our key partners from the messaging team—Pradhan Cadabam, Pengyu Li, Manny Lavery, and Rashmi Menon. This project would not have been possible without the strong support from our engineering leaders: Parin Shah, Michael Chernyak, Josh Walker, Nash Raghavan, and Chris Pruett.


