In this post, we will cover why we decided to rebuild the backend system that supports this feature, the relevant technical background, how we re-architected it, and the benefits we have achieved so far.
Motivations for rebuilding the system
The initial profile highlights system was built five years ago to support a relatively small number of highlight types between two members. It started as a proof of concept and was built as a monolith, with all highlight types developed by a single team and limited focus on extensibility and modularity. It worked well for a while, but as the company grew to focus on building an active community, new challenges surfaced.
These challenges were not well addressed by the old system. A new highlight developer would have to understand a significant part of the service in order to add a new highlight type. Numerous changes would have to be made to different parts of the system, from the entry point of the service all the way down to the code where downstream service calls are made. This includes, but is not limited to, adding a method to an interface, adding an implementation for it, wiring in the new implementation, and creating a LiX control (LinkedIn's A/B testing infrastructure). All of these changes applied to test code as well. Consequently, the existing system was prone to bugs and iteration speed suffered.
The new architecture
Before we dive into the new architecture, we’d like to briefly introduce the relevant technologies.
LinkedIn employs a microservices architecture to deliver most member experiences. A single page view can fan out to a large number of downstream service calls to fetch information, such as profile data, connection information, profile highlights, and endorsements, from different services. Each of these calls can further fan out to even more downstream services. For example, the profile highlights service invokes other services to get profile data, shared connections, jobs, etc.
Rest.li is an open source framework, developed at LinkedIn, for building RESTful services.
ParSeq is an open source library to write asynchronous code in Java. Key features provided by ParSeq include parallelization of asynchronous calls and composition of asynchronous calls.
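To make the parallelization-and-composition pattern concrete without pulling in the ParSeq dependency, here is an analogous sketch using the JDK's `CompletableFuture`. ParSeq's actual API differs (it is built around `Task.par(...)` and task composition); the service names and the shared-connection count below are made up for illustration.

```java
import java.util.concurrent.CompletableFuture;

public class ParallelFetchSketch {
    // Hypothetical asynchronous downstream calls (stand-ins for real services).
    static CompletableFuture<String> fetchProfile(String memberId) {
        return CompletableFuture.supplyAsync(() -> "profile:" + memberId);
    }

    static CompletableFuture<Integer> fetchSharedConnections(String memberId) {
        return CompletableFuture.supplyAsync(() -> 42); // made-up count
    }

    // Parallelization + composition: both fetches run concurrently, then
    // their results are combined -- analogous in spirit to composing
    // parallel ParSeq tasks.
    static String buildHighlight(String memberId) {
        return fetchProfile(memberId)
            .thenCombine(fetchSharedConnections(memberId),
                         (profile, count) -> profile + " has " + count + " shared connections")
            .join();
    }

    public static void main(String[] args) {
        System.out.println(buildHighlight("m1"));
    }
}
```

The key property is that the two downstream calls do not wait on each other; only the final combination step blocks, which is exactly what makes this style attractive for fan-out-heavy services.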
LoadingCache is a cache implementation provided by Google Guava. It combines caching with the ability to load a value on a cache miss. It also deduplicates concurrent misses for the same key while a load is in flight, so the loading operation runs only once.
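The load-on-miss behavior can be sketched with JDK primitives alone: `ConcurrentHashMap.computeIfAbsent` also invokes the loader at most once per key under concurrent access. This is a simplified stand-in, not Guava's API (a real `LoadingCache` is built via `CacheBuilder.newBuilder().build(CacheLoader)` and adds eviction, expiry, and statistics).

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class LoadingCacheSketch<K, V> {
    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    public LoadingCacheSketch(Function<K, V> loader) {
        this.loader = loader;
    }

    // On a miss, computeIfAbsent runs the loader exactly once for the key,
    // even when several threads miss concurrently -- similar to
    // LoadingCache.get(key).
    public V get(K key) {
        return cache.computeIfAbsent(key, loader);
    }

    public static void main(String[] args) {
        AtomicInteger loads = new AtomicInteger();
        LoadingCacheSketch<String, String> cache =
            new LoadingCacheSketch<>(k -> {
                loads.incrementAndGet();
                return "value-for-" + k;
            });
        cache.get("a");
        cache.get("a"); // served from cache; loader not invoked again
        System.out.println(loads.get()); // prints 1
    }
}
```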
The solution we adopted to address these challenges was to re-architect the system into a platform with a plug-in architecture, where each highlight type is implemented as a plugin that can be registered and integrated easily and independently, as depicted in the architectural diagram.
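One possible shape of such a plug-in contract is sketched below. The interface and class names (`HighlightPlugin`, `HighlightRegistry`) and the string-based result type are hypothetical, chosen only to illustrate the key property: adding a new highlight type means writing and registering one new plugin, with no changes to the platform code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Hypothetical plug-in contract: each highlight type implements this
// interface independently.
interface HighlightPlugin {
    String type();
    // Returns a rendered highlight for the viewer/viewee pair, if applicable.
    Optional<String> compute(String viewerId, String vieweeId);
}

class HighlightRegistry {
    private final List<HighlightPlugin> plugins = new ArrayList<>();

    void register(HighlightPlugin plugin) {
        plugins.add(plugin);
    }

    // The platform simply iterates over registered plugins; this code
    // never changes when a new highlight type is added.
    List<String> computeAll(String viewerId, String vieweeId) {
        List<String> results = new ArrayList<>();
        for (HighlightPlugin plugin : plugins) {
            plugin.compute(viewerId, vieweeId).ifPresent(results::add);
        }
        return results;
    }
}

public class PluginSketch {
    public static void main(String[] args) {
        HighlightRegistry registry = new HighlightRegistry();
        // A new highlight type is one registration away.
        registry.register(new HighlightPlugin() {
            public String type() { return "sharedConnections"; }
            public Optional<String> compute(String viewer, String viewee) {
                return Optional.of("You share 12 connections"); // made-up data
            }
        });
        System.out.println(registry.computeAll("m1", "m2"));
    }
}
```

In this style, extensibility comes from inversion of control: the platform owns the orchestration loop, and each team owns only its plugin.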