Building a heterogeneous social network recommendation system

Figure 5: Tight coupling between SPR and Edge-FPR entails multiple back-and-forth updates
a) A marginal improvement to an Edge-FPR model triggers a retraining of SPR because the Edge-FPR score has changed. b) An improvement to SPR likewise triggers a retraining of each of the Edge-FPR models.

We keep Edge-FPR and SPR completely independent to avoid back-and-forth updates that would slow experiment velocity. Mandating a retraining of SPR every time an Edge-FPR model is updated would slow down iteration for the FPR teams. We consciously prioritize the agility of the engineering teams developing Edge-FPRs, and instead capture the benefit of SPR retraining at its own cadence.
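To make the decoupling concrete, here is a minimal sketch of such a two-stage pipeline. The class names (`Cohort`, `EdgeFPR`, `SecondPassRanker`) and the placeholder scoring logic are illustrative assumptions, not LinkedIn's actual implementation; the point is only that each first-pass ranker and the second-pass ranker can be swapped or retrained independently.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Cohort:
    edge_type: str          # "connection", "follow", or "subscribe"
    candidates: List[str]   # entities recommended in this cohort
    fpr_score: float = 0.0  # set by the edge type's own FPR model

class EdgeFPR:
    """First-pass ranker for a single edge type, owned by one team.
    It can be retrained and redeployed without touching the SPR."""
    def __init__(self, edge_type: str, score_fn: Callable[[Cohort], float]):
        self.edge_type = edge_type
        self.score_fn = score_fn

    def rank(self, cohorts: List[Cohort]) -> List[Cohort]:
        for c in cohorts:
            c.fpr_score = self.score_fn(c)
        return sorted(cohorts, key=lambda c: c.fpr_score, reverse=True)

class SecondPassRanker:
    """Ranks cohorts across all edge types. Trained only on
    random-bucket tracking data, so an Edge-FPR update never
    forces an SPR retrain (and vice versa)."""
    def rank(self, cohorts: List[Cohort]) -> List[Cohort]:
        # Placeholder scoring; the real SPR is a learned model
        # over cohort-level features.
        return sorted(cohorts, key=lambda c: len(c.candidates), reverse=True)
```

Because the SPR here consumes cohorts rather than depending on any particular FPR's internals, an improvement shipped inside one `EdgeFPR` does not change the SPR's contract.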

Model training and random bucket
For training our SPR, we maintain a 5% holdout (the random bucket) in which cohorts are shown in a completely random order. Tracking data from this random bucket is used to train the SPR model. Because this data is free of bias from the cohort rankings produced by the Edge-FPR models, it can also be used by the individual responsible teams to train their Edge-FPR models.
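A minimal sketch of how such a holdout might be served, assuming a stable per-viewer hash keeps bucket membership consistent across sessions (the hash choice and function names are illustrative, not taken from the post):

```python
import hashlib
import random

HOLDOUT_PCT = 5  # size of the random bucket, in percent

def in_random_bucket(viewer_id: str) -> bool:
    """Deterministic per-viewer bucketing so a member's bucket
    assignment is stable across sessions."""
    digest = hashlib.sha256(viewer_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 100 < HOLDOUT_PCT

def serve_cohorts(viewer_id: str, spr_ranked: list) -> list:
    """Random-bucket viewers see a uniformly shuffled cohort order;
    tracking data logged from these sessions is unbiased by any ranker."""
    if in_random_bucket(viewer_id):
        shuffled = list(spr_ranked)
        random.shuffle(shuffled)
        return shuffled
    return spr_ranked
```

The uniform shuffle is what makes the logged impressions usable as unbiased training labels for both SPR and the Edge-FPR models.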

Impression guarantees
When introducing a new entity, our SPR model could end up undeservedly pushing cohorts of that entity down in the ranking, leading to limited exposure and little training data for the teams building their own FPR models. To avoid this, we provide a stochastic impression guarantee for cohorts of each type. One simple way to provide such a guarantee is to get a read on the relative importance of the different edge types (connection, follow, subscribe), define a constraint on the number of cohorts that should be shown for each category, and use that as the initial guarantee. Note that the system still remains flexible in ranking these cohorts in any order. For instance, a 1:2 importance for the connection vs. follow edge would entail showing twice as many impressions of follow cohorts as connection cohorts, while the ranking among them is still dictated by the SPR.
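One way to implement such a guarantee is sketched below: reserve a minimum number of the top-k slots per edge type, then fill any remainder by overall SPR order, keeping the SPR's relative ordering on screen. The function name, the dict-based cohort representation, and the 1:2 quota are all illustrative assumptions.

```python
from collections import defaultdict

def apply_impression_guarantee(spr_ranked, min_slots, k):
    """Pick k cohorts from the SPR-ranked list (best first) while
    guaranteeing a minimum number of slots per edge type.
    min_slots, e.g. {"connection": 1, "follow": 2}, encodes a 1:2
    connection-vs-follow importance ratio for k = 3 slots."""
    by_type = defaultdict(list)
    for cohort in spr_ranked:
        by_type[cohort["edge_type"]].append(cohort)

    # Fill each edge type's reserved slots with its best cohorts first.
    chosen = []
    for edge_type, n in min_slots.items():
        chosen.extend(by_type[edge_type][:n])

    # Fill any leftover slots by overall SPR order.
    for cohort in spr_ranked:
        if len(chosen) >= k:
            break
        if cohort not in chosen:
            chosen.append(cohort)

    # The final on-screen order is still dictated by the SPR.
    return sorted(chosen, key=spr_ranked.index)
```

Note that the quota only constrains how many cohorts of each type appear among the k impressions; their positions remain whatever the SPR scores dictate, matching the flexibility described above.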

Periodically, we get fresher reads of this relative importance (based on iterations of the counterfactual experiments or on how the SPR system performs on our metrics) and update the cohort impression guarantees accordingly (monthly or quarterly). We specifically choose to guarantee impressions at a per-viewer, per-edge-type level; this does not provide a global guarantee for each cohort. A global impression guarantee for each cohort, while ideally preferable, would be far too complex to operationalize for the value it would provide.

Results and next steps

To measure the impact of our SPR system, we conducted A/B tests, which showed an increase in the number of engaged members and a significant increase in members' downstream interactions. The new system helped more members not only create edges (e.g., connecting to other members, following hashtags, subscribing to newsletters), but also have conversations over these newly formed edges.

An effect we see in our heterogeneous social network recommendation system is cannibalization across edge types. The formation of certain edges can come at the cost of others; while there might be an overall increase in the number of edges and member interactions, the distribution of this increase over the different edge types depends on the specifics of the SPR algorithm, which is chosen to satisfy the product specifications for each edge type. There is also heterogeneity in interactions among member groups. Frequent members continuously provide us with rich data for showing high-quality recommendations, while inactivity from infrequent members leads to a lack of data and lower-quality recommendations. We plan to address these limitations and continue to invest in our strategy of building more holistic and active communities on LinkedIn that help make all of our members more productive and successful.


It takes a lot of talent and dedication to build the AI products that drive our mission of building active communities on LinkedIn. We would like to thank Aastha Jain, Yan Wang, Ashwin Murthy, Abdul Al-Qawasmeh, Albert Cui, Jugpreet Talwar, Zhiyuan Xu, Bohong Zhao, Judy Wang, Chiachi Lo, Mingjia Liu, David Sung, Qiannan Yin, Quan Wang, Jenny Wu, Andrew Yu, Shaunak Chatterjee, and Hema Raghavan for their instrumental support, and Shaunak Chatterjee, Yiou Xiao, Kinjal Basu, Michael Kehoe, Heyun Jeong, Stephen Lynch, and Jaren Anderson for helping us improve the quality of this post. Finally, we are grateful to our fellow team members from PYMK AI, Growth Eng, Communities AI, Optimus AI, and Growth Data Science teams for the great collaboration.
