Building a Label-Based Enforcement Pipeline for Trust & Safety | by Pinterest Engineering | Pinterest Engineering Blog | May, 2021

Sharon Xie | Software Engineer, Trust & Safety

As Pinterest grows with increasing numbers of users and businesses, providing a safe and trustworthy experience is one of our top priorities. Every day the platform serves billions of Pins and boards to inspire Pinners. With so much content and activity, it can be a challenge to provide timely and consistent decisions for content safety while still unlocking high-quality content distribution. In this blog post, we offer a technical deep dive into the problems we faced and show how we built a label-based enforcement pipeline to tackle them and fight abuse at scale.

Growing infra cost of signal storage

In the early stage of Pinterest, blocklists (containing IDs of content to be blocked from product surfaces) were the most widely adopted signal storage for content filters. A blocklist was deployed on every serving host, with Apache ZooKeeper as its synchronization service, and all of its items were loaded into memory. Whenever there was an update, the entire blocklist was pushed to every host, consuming bandwidth. As the amount of content grew, blocklists became very large. This growth not only put increasingly intense memory pressure on product hosts, but also incurred a growing infrastructure cost every year. As these problems mounted, we realized blocklists were no longer sufficient to support Pinterest’s growth.

Enforcement discrepancies across systems

As Pinterest’s engineering team grew, teams often built systems in isolation that didn’t communicate with each other. One tricky problem this caused was that when a Pin had been removed by different automated systems, our review agents could not get a complete view of enforcement and so could not properly process a user appeal request.

Signal enforcement can be complicated

Determining how to synthesize multiple signals into a policy enforcement decision can be complicated, and signal quality varies across different types of systems. We must also minimize false positives from our automated systems so businesses and creators on Pinterest are not adversely impacted. Additionally, different surfaces have different requirements for signal quality: recommendation surfaces generally require signals with lower false-positive rates, while notification surfaces require signals with lower false-negative rates. We must figure out how to collect the most relevant signals on each surface so that Pinterest users have the optimal product experience.

To make Trust & Safety policy enforcement timely, consistent, and flexible, we built a label-based enforcement pipeline. The pipeline consumes labels from a variety of product and policy stakeholders and renders the most relevant label for actioning content (such as filtering it), depending on the product surface (e.g., home feed or search). In this section, we present the building blocks of the end-to-end enforcement pipeline.


A label is a Thrift struct that captures the source, time, suggested enforcement, and reason:

  • The source denotes “who” created the label. It includes information about the source system, source type (automated, human), and name.
  • The suggested enforcement indicates “what” the visibility state of the labeled entity should be. Some content, like pornography, is never allowed on Pinterest, and we remove it. Other content, like a selfie, might not violate policy, but we may want to limit its distribution if it isn’t inspirational to Pinners.
  • The time denotes “when” the enforcement was produced by the given source. It is also a critical signal for label freshness.
  • The reason denotes “why” the enforcement is given in terms of which policy was violated, e.g. spam, porn, or misinformation.

A label can be associated with any entity, such as a Pin, board, or user. Figure 1 shows an example of labels on a Pin entity. In the example, two labels are associated with the Pin with ID 1233211212. One label is from a human review system, which suggested blocking the Pin’s distribution on the platform because it violates the pornography section of our Community Guidelines. The other label is from an automated system, which suggested limiting the Pin’s distribution on the platform because it violates the spam section of our Community Guidelines.
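To make this concrete, here is a minimal sketch of the label structure and the two labels from Figure 1, using a Python dataclass in place of the actual Thrift definition (all field, enum, and source names here are illustrative assumptions, not the real schema):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class SourceType(Enum):
    AUTOMATED = "automated"
    HUMAN = "human"

class Enforcement(Enum):
    BLOCK = "block"   # remove from the platform entirely
    LIMIT = "limit"   # restrict distribution on some surfaces
    ALLOW = "allow"

@dataclass(frozen=True)
class Label:
    source_system: str        # "who": the system that created the label
    source_type: SourceType
    enforcement: Enforcement  # "what": suggested visibility state
    created_at: datetime      # "when": also a freshness signal
    reason: str               # "why": which policy was violated

# The two labels from Figure 1, attached to Pin 1233211212:
human_label = Label("review_tool", SourceType.HUMAN,
                    Enforcement.BLOCK, datetime.now(timezone.utc), "porn")
auto_label = Label("spam_detector", SourceType.AUTOMATED,
                   Enforcement.LIMIT, datetime.now(timezone.utc), "spam")

labels_by_pin = {1233211212: [human_label, auto_label]}
```

The struct itself carries only the four “who/what/when/why” facts; resolving the conflict between the two suggested enforcements is left to downstream components described below.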

Figure 1. Labels with Pin

Label Store as the source of truth

Figure 2. Label-Based Enforcement Pipeline

The label-based enforcement pipeline covers the whole process from label creation to label serving. At Pinterest, labels for Trust & Safety enforcement can come from a periodically updated batch source or from a streaming source, as shown in Figure 2. Batch labels include all labels generated by batch jobs running on Pinterest’s big data platforms, such as ML model inference and critical business data joins. These labels typically arrive once per day, and their serving SLA is 24 hours. Streaming labels, on the other hand, are generated by integrity systems or streaming jobs, such as review tools, administrative apps, realtime abuse detection systems, and realtime ML model inference. These labels can typically be served within five minutes of their creation. To overcome the challenges of distributed labels from different systems discussed earlier, we built a unified Label Store to host all labels, regardless of their source. A centralized Label Store has two primary advantages:

  • It overcomes the storage limitations of a blocklist and can be horizontally scaled as data volume or serving traffic increases.
  • It lets us centralize the management of the label life cycle, providing a single source of truth for labels. This makes it much easier to support reliable auditing of operational updates and consistent label serving.

We adopted Pinterest’s in-house key-value data solution, built with Rocksplicator, for data storage. Figure 3 shows our data model. The entity, such as a Pin, board, or user, is modeled as the primary key. All labels for a given entity are aggregated together for fast queries. The label source is modeled as a feature key, on which atomic operations such as addition, deletion, and update are supported.
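The data model can be sketched as a two-level map: entity → source → label. This is only an in-memory illustration of the shape described above; in production the store is a replicated key-value database, not a Python dict, and the function names are hypothetical:

```python
# entity id -> { source name (feature key) -> label payload }
store: dict[str, dict[str, dict]] = {}

def put_label(entity: str, source: str, label: dict) -> None:
    """Add or update the label for one source (a per-feature-key operation)."""
    store.setdefault(entity, {})[source] = label

def delete_label(entity: str, source: str) -> None:
    """Remove the label a given source placed on an entity, if any."""
    store.get(entity, {}).pop(source, None)

def get_labels(entity: str) -> list[dict]:
    """All labels for an entity come back together in one primary-key lookup."""
    return list(store.get(entity, {}).values())

put_label("pin:1233211212", "review_tool", {"enforcement": "block", "reason": "porn"})
put_label("pin:1233211212", "spam_detector", {"enforcement": "limit", "reason": "spam"})
```

Keying on the entity keeps the common read path (“give me every label for this Pin”) a single lookup, while keying the inner map on the source lets each producing system overwrite only its own label.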

Figure 3. Label Store Data Model

All label operations run in a Label Operation Task. This task is managed by a job service (aka PinLater) that takes care of injection rate, task concurrency, and client-side retries, ensuring that labels can be properly added to or removed from the Label Store. The task also works with a Label Checker to perform label sanitization and a preliminary check. The Label Checker plays a critical role in the enforcement pipeline in two ways: it ensures no malformed or duplicate labels are created in the database, and it uses heuristics to prevent important business partners from being impacted by false positives. For example, if an automated system placed a negative label on a trusted creator, the Label Checker would detect this case and trigger an agent review of content from this partner. No enforcement action would be taken until the review finished.
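A minimal sketch of the checker logic described above — sanitization, deduplication, and the trusted-creator guardrail. The field names, verdict strings, and allowlist are illustrative assumptions, not the real implementation:

```python
REQUIRED_FIELDS = {"source", "enforcement", "created_at", "reason"}
TRUSTED_CREATORS = {"creator_42"}  # illustrative partner allowlist

def check_label(label: dict, existing: list[dict], owner: str) -> str:
    """Return 'reject', 'hold_for_review', or 'accept' (verdict names are illustrative)."""
    # 1. Sanitization: reject malformed labels missing required fields.
    if not REQUIRED_FIELDS <= label.keys():
        return "reject"
    # 2. Deduplication: reject a label identical to one already stored.
    if any(label == e for e in existing):
        return "reject"
    # 3. Guardrail heuristic: a negative label from an automated system on a
    #    trusted creator is held for human review before any enforcement.
    if (label["source"]["type"] == "automated"
            and label["enforcement"] in ("block", "limit")
            and owner in TRUSTED_CREATORS):
        return "hold_for_review"
    return "accept"
```

The key design point is that the hold-for-review path defers enforcement rather than discarding the signal: the label still exists, but nothing is actioned until an agent confirms it.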

The Label Store also provides an API that returns all labels for a given entity. This API works with a Label Selector to group labels for different applications. The Label Selector can easily be configured to keep only the labels that match a source query. Figure 4 shows an example config that returns all human labels plus all automated labels from system2.
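A selector of this shape can be sketched as a list of source queries, where a label is kept if it matches any query. The config below mirrors the Figure 4 example (all field values and system names are illustrative):

```python
# Select every human label, plus automated labels only from "system2".
SOURCE_CONFIG = [
    {"source_type": "human"},                         # any human source
    {"source_type": "automated", "name": "system2"},  # one specific automated source
]

def matches(label: dict, query: dict) -> bool:
    # A label matches a query when every queried field agrees.
    return all(label.get(k) == v for k, v in query.items())

def select_labels(labels: list[dict], config: list[dict]) -> list[dict]:
    # Keep a label if any query in the config matches it.
    return [l for l in labels if any(matches(l, q) for q in config)]

labels = [
    {"source_type": "human", "name": "review_tool", "enforcement": "block"},
    {"source_type": "automated", "name": "system1", "enforcement": "limit"},
    {"source_type": "automated", "name": "system2", "enforcement": "limit"},
]
```

Because the config is pure data, a surface can tighten or loosen its signal sources without any code change, which is what makes per-surface SLA tuning cheap.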

Figure 4. Example of Source Config

The Label Conflict Resolver is a component that provides ranking functionality for the selected labels. We apply the ranking function to render a final label to serve in production; Figure 5 illustrates this. Both label source reputation and label freshness are weighted in the resolver function.

Figure 5. Example Ranking Equation
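A resolver of this shape can be sketched as a weighted score over source reputation and label age, serving the highest-scoring label. The weights, reputation values, source names, and decay curve below are all illustrative assumptions, not Pinterest's actual ranking function:

```python
import math

REPUTATION = {"human_review": 1.0, "ml_model": 0.6}  # illustrative per-source scores
W_REPUTATION, W_FRESHNESS = 0.7, 0.3                 # illustrative weights
HALF_LIFE_S = 7 * 24 * 3600                          # freshness halves every week

def score(label: dict, now: float) -> float:
    """Weighted combination of source reputation and exponentially decayed freshness."""
    reputation = REPUTATION.get(label["source"], 0.0)
    age_s = now - label["created_at"]
    freshness = math.exp(-math.log(2) * age_s / HALF_LIFE_S)
    return W_REPUTATION * reputation + W_FRESHNESS * freshness

def resolve(labels: list[dict], now: float) -> dict:
    """Render the single label to serve: the highest-scoring candidate."""
    return max(labels, key=lambda l: score(l, now))
```

With these example weights, a ten-day-old human review still outranks a brand-new ML label, reflecting the intuition that reputation should dominate unless the trusted signal has gone stale.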

In practice, the Label Conflict Resolver, together with the Label Selector, has empowered the Trust & Safety team to operate on different Pinterest product surfaces with the most relevant enforcement: the selector lets us collect labels that meet the filtering SLA for a particular surface, and the resolver ensures the most appropriate heuristics are applied for entity enforcement. The final enforcement for entities is served via an API. All Pinterest public surfaces can integrate with the API to block bad content, while our internal tools can leverage it to understand content serving status and take further action against harmful content.

After launching the label enforcement pipeline and integrating it with our most critical product surfaces, like home feed and search, we were able to leverage thousands of high-quality labels from our agent review tools as well as our ML platforms. The pipeline also improved scalability: we can enforce against bad content more efficiently, and even though checks moved from in-memory lookups to a service call, there was no noticeable drop in performance metrics such as latency or success rate. Because of the thoughtful design, we were also able to minimize false positives so that they didn’t adversely impact Pinners or partners. We’ve seen significant positive user engagement metrics when testing Label Store behind an experiment, and this impact will continue to grow as more and more surfaces integrate with Label Store.

By centralizing labels for enforcement, we managed to provide enforcement consistency as well as transparency for all clients. Before, finding out why a Pin was taken down meant searching through multiple systems and tables. Now, there is a single search hub outlining the history of label enforcement change events. Ultimately, Label Store has become one of the most important tools in keeping Pinterest safe for our users and advertisers.

Huge thanks to Revant Kapoor, Nilesh Gohel, Xinyuan Chen, Jiabin Wang, Abhishek Jathan, Vladimir Mikhaylovskiy, Jared Wong, Farran Wang, Alok Singhal, Maisy Samuelson, and the rest of the Trust & Safety team for helping to design and build the enforcement pipeline and tooling! Thanks to Harry Shamansky for help with this blog post!
