How Airbnb is Moving 10x Faster at Scale with GraphQL and Apollo

GraphQL Unions for Backend-Driven UI

At the beginning of the talk, the product's existing state presupposes we have built a system where a very dynamic page is constructed from a query that returns an array of some set of possible “sections.” These sections are responsive and define the UI completely.

The central file that manages this would be a generated file (later, we will get to how to generate it) that looks something like this:
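The original post showed this file as a screenshot; below is a minimal sketch of what such a generated file might look like. The section names and the tiny stand-in for graphql-tag are assumptions to keep the sketch self-contained, not Airbnb's real code.

```typescript
// Hypothetical sketch of the generated Sections.tsx. A tiny stand-in for
// graphql-tag keeps the example self-contained.
const gql = (strings: TemplateStringsArray, ...parts: string[]): string =>
  strings.reduce((acc, s, i) => acc + s + (parts[i] ?? ""), "");

// Fragments are defined alongside each section component and imported here.
const HeroSectionFragment = gql`
  fragment HeroSectionFragment on HeroSection {
    title
    imageUrl
  }
`;
const TextSectionFragment = gql`
  fragment TextSectionFragment on TextSection {
    body
  }
`;

// Maps each possible __typename to a lazy-loading thunk; in the real file
// these are import() calls understood by the server-rendering-aware loader.
export const sectionMapping: Record<string, () => Promise<{ default: string }>> = {
  HeroSection: async () => ({ default: "HeroSection" }),
  TextSection: async () => ({ default: "TextSection" }),
};

// The backend returns an array of Section union members; spreading every
// registered fragment lets the server decide which sections to send.
export const SectionsFragment = gql`
  fragment SectionsFragment on Section {
    __typename
    ...HeroSectionFragment
    ...TextSectionFragment
  }
  ${HeroSectionFragment}
  ${TextSectionFragment}
`;
```

Adding a new section means touching exactly this file: one import, one mapping entry, and one fragment spread, which is why the change lends itself to codegen.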

Since the list of possible sections is quite large (~50 sections today for Search), it also presumes we have a sane mechanism for lazy-loading components with server rendering, which is a topic for another post. Suffice it to say, we do not need to package all possible sections in a massive bundle to account for everything up front.

Each section component defines its own query fragment, colocated with the section’s component code. That looks something like this:
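The colocated fragment also appeared as a screenshot; here is a hedged sketch of the pattern, with a plain render function standing in for the real React component and the same stand-in gql tag:

```typescript
// Hypothetical section file: the component and its fragment live together,
// so a change to one is reviewed next to the other. Names are illustrative.
const gql = (strings: TemplateStringsArray, ...parts: string[]): string =>
  strings.reduce((acc, s, i) => acc + s + (parts[i] ?? ""), "");

// The fragment declares exactly the data this section needs.
export const HeroSectionFragment = gql`
  fragment HeroSectionFragment on HeroSection {
    title
    imageUrl
  }
`;

interface HeroSectionProps {
  title: string;
  imageUrl: string;
}

// A plain render function stands in for the real React component.
export function HeroSection({ title, imageUrl }: HeroSectionProps): string {
  return `<section><img src="${imageUrl}" alt=""/><h1>${title}</h1></section>`;
}
```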

This is the general idea of Backend-Driven UI at Airbnb. It’s used in a number of places, including Search, Trip Planner, Host tools, and various landing pages. We use this as our starting point, and then in the demo show how to (1) make an update to an existing section, and (2) add a new section.

Explore Your Schema With Playground

While building your product, you want to be able to explore your schema, discovering field names and testing out potential queries on live development data. We achieve that today with GraphQL Playground, the work of our friends at Prisma. The tool comes standard with Apollo Server.

In our case, backend services are primarily written in Java, and their schemas are stitched together by our Apollo Server, which we call Niobe. For now, since Apollo Gateway and Schema Composition are not yet live, all our backend services are name-spaced by service name. This is why exploring the playground begins with a list of service names. The next level down in the tree is the service method. In this case, getJourney().

View Your Schema in VS Code with the Apollo Plugin

One of the joys I wanted to demonstrate in the talk is having so many helpful tools at my fingertips while building. This includes access to Git in VS Code, as well as the integrated terminal and tasks for running frequently-needed commands.

Of course, we also had some fun stuff to show for GraphQL and Apollo! The part that most people had not seen was the new Apollo GraphQL VS Code Extension. There is no need for me to copy over all the features from their marketing site, but I will elaborate on one feature: Schema Tags.

If you are going to lint your queries against the schema you are working on, you will invariably be presented with the decision of “which schema?” The default may be your production schema (“current,” by convention), but as we discuss in the demo, if you need to iterate and explore ideas, you need the flexibility of moving between schemas.

Since we are using Apollo Engine, publishing multiple schemas using tags allows us this flexibility, and multiple engineers can collaborate on a single proposed schema with nominal effort. Once proposed schema changes for a service are completed upstream and those changes are naturally flowing down in the current production schema, we can flip back to “current” in VS Code. Very cool.

Automatically Generate Types

The goal with Codegen is to benefit from strong type safety without having to manually create TypeScript types or React PropTypes. This is critical because our query fragments are distributed among the components that use them. This is why making a 1-line change to a query fragment results in 6–7 files being updated: that same fragment appears in numerous places in the query hierarchy, in parallel to the component hierarchy.

This part is nothing but Apollo CLI functionality. We are working on a particularly fancy file watcher (named “Sauron,” obviously), but for now it is no trouble at all to run apollo client:codegen --target=typescript --watch --queries=frontend/luxury-guest/**/*.{ts,tsx} as needed. It is good to be able to flip off the codegen during rebases, and I typically filter my queries down to the project I am working on.

My favorite part is that since we are co-locating our fragments with our components, changing a single file results in many files being updated as the change propagates up the component hierarchy. This means that higher up in the tree, near the route component, we can see the consolidated query and all the various types of data it can pass through.

No magic there at all. Just Apollo CLI.

Isolate your UI Changes with Storybook

The tool we use for editing UI is Storybook. It is the perfect place to make sure your work aligns with designs to the pixel across breakpoints. You get fast hot module reloading and a couple checkboxes to enable/disable browser features like Flexbox.

The only trick I apply to Storybook is loading the stories with the mock data we’ve extracted from the API. If your mock data really covers all the various possible states for your UI, you are good to go. Beyond that, if you have alternative states you want to account for, perhaps loading or error states, you can add them in manually.

This is the crux of the matter for Storybook. This file is entirely generated from Yeoman (discussed below), and it delivers the examples from the Alps Journey by default. getSectionsFromJourney() just filters the sections.
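The generated story file itself was shown on screen; here is a self-contained sketch of the getSectionsFromJourney() idea, with made-up journey data standing in for the mock JSON extracted from the API:

```typescript
// Illustrative shapes; the real mock data is extracted JSON from the
// Shared Development Environment, not hand-written objects.
interface Section {
  __typename: string;
  [key: string]: unknown;
}

interface Journey {
  sections: Section[];
}

// Stand-in for the extracted Alps Journey mock data.
const alpsJourney: Journey = {
  sections: [
    { __typename: "HeroSection", title: "The Alps" },
    { __typename: "TextSection", body: "Welcome to the mountains." },
  ],
};

// getSectionsFromJourney just filters the journey's sections down to the
// ones a given story wants to render.
export function getSectionsFromJourney(
  journey: Journey,
  typenames: string[]
): Section[] {
  return journey.sections.filter((s) => typenames.includes(s.__typename));
}

// A story for a single section would then render only that slice, e.g.
// getSectionsFromJourney(alpsJourney, ["HeroSection"]).
export const heroOnly = getSectionsFromJourney(alpsJourney, ["HeroSection"]);
```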

One other hack you’ll notice is that I added a pair of divs to bookend my component vertically, since Storybook renders with whitespace around the component. That is fine for buttons or UI with borders, but it’s hard to tell precisely where your component starts and ends, so I hacked them in there.

Since we are talking about how all these fabulous tools work so well together to help you be productive, can I just say what a delight it is to work on UI with Zeplin or Figma side by side with Storybook. Digging into UI in this abstract way takes all the chaos of this madcap world away one breakpoint at a time, and in that quiet realm, you are good down to the pixel every time.

Automatically Retrieve Mock Data

To supply Storybook and our unit tests with realistic mock data, we want to extract the mock data directly from our Shared Development Environment. As with codegen, even a small change in a query fragment should also trigger many small changes in mock data. And here, similarly, the hard part is tackled entirely by Apollo CLI, and you can stitch it together with your own code in no time.

The first step is simply to run apollo client:extract frontend/luxury-guest/apollo-manifest.json, and you will have a manifest file with all the queries from your product code. One thing you may notice is that the command is name-spaced to this “luxury guest” project because I do not want to refresh all possible mock data for all possible teams.

This command is lovely because my queries are all in TypeScript files, but this command will execute on source and combine all the imports. I don’t have to run it on babel/webpack output.

The piece we then add to this is short and mechanical:
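That glue was shown as a screenshot; a minimal sketch of the idea follows. The manifest shape and the executor are assumptions: in the real script, the executor is a POST to the dev GraphQL endpoint, and each result is written to a colocated JSON file.

```typescript
// One entry per operation in the manifest produced by apollo client:extract
// (shape assumed for illustration).
interface ManifestOperation {
  name: string;
  body: string;
}

// Runs one query against the Shared Development Environment; in the real
// script this is an HTTP POST to the dev GraphQL endpoint.
type Executor = (query: string) => unknown;

// For each query in the manifest, fetch live dev data and key the result by
// operation name so Storybook and unit tests can import it as mock data.
export function refreshMocks(
  operations: ManifestOperation[],
  execute: Executor
): Record<string, unknown> {
  const mocks: Record<string, unknown> = {};
  for (const op of operations) {
    mocks[op.name] = execute(op.body);
  }
  return mocks;
}
```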

We are currently working with the Apollo team to extract this logic to the Apollo CLI, as well. I could imagine a world where the only thing you need to specify is the array of examples you want, and co-locating them in a folder with a query would automatically codegen the mocks on demand. Imagine keying the mocks off the query name, like so:
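For instance, a hypothetical configuration (this API does not exist in the Apollo CLI today; the query names and example labels are made up) might look like:

```typescript
// Imagined future API: name the examples you want per query, and let the
// tooling generate the corresponding mocks on demand.
export const mockExamples: Record<string, string[]> = {
  AlpsJourneyQuery: ["default", "empty", "error"],
  SearchSectionsQuery: ["default"],
};
```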

If you have some ideas for how you would want to use this, please don’t hesitate to reach out!

Add Visual Diffs to Code Review with Happo

Happo is a straight up life-saver. It is the only visual diffing tool I’ve ever used, so I would not be sophisticated enough to compare it to alternatives, if there are any, but the essential idea is that you push code, and it goes off and renders all the components in your PR, comparing them with their versions on master.

This means if you edit a component like <Input /> it will show you the impact on components that use Input, including the Search Bar. It. Is. Fabulous.

How many times did you think your change was contained only to discover that ten other teams started using what you built, and your change breaks three of the ten? Without Happo, you might not know.

Until lately, the only downside with Happo was that our Storybook variations (the input to the visual diffing process) were not always backed by reliable data. Now that Storybook is leveraging API data, we can feel much more confident. Plus, as our demo explores, it is automatic: if you add a field to the query and then the component, Happo will automatically post the diff to your PR, letting the engineer sitting next to you see the visual consequences of the change you have made.

Generate New Files with Yeoman

If you need to scaffold a bunch of files multiple times, you should build a generator. It will turn you into an army of you. I was shocked following the talk by how many people thought we had a flotilla of infrastructure engineers working on what I showed in that video. Other than the AST Transformations (which I will address next), this was just three template files and this puppy:
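The generator itself appeared on screen; here is a self-contained sketch of what its writing step does, with a toy template renderer standing in for Yeoman's this.fs.copyTpl and made-up template contents:

```typescript
// What the prompt collects (illustrative; the real generator asks for more).
interface Answers {
  sectionName: string; // e.g. "MapSection"
}

// Tiny stand-in for EJS-style rendering, which Yeoman's copyTpl does for real.
function renderTemplate(template: string, answers: Answers): string {
  return template.replace(/<%= sectionName %>/g, answers.sectionName);
}

// The "three template files": component, story, and test (contents made up).
const templates: Record<string, string> = {
  component: "export function <%= sectionName %>() { /* ... */ }",
  story: "// stories for <%= sectionName %>",
  test: "describe('<%= sectionName %>', () => {});",
};

// Render each template into its destination path, keyed the way the real
// generator would write files to disk.
export function scaffoldSection(answers: Answers): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [kind, tpl] of Object.entries(templates)) {
    out[`${answers.sectionName}/${kind}.tsx`] = renderTemplate(tpl, answers);
  }
  return out;
}
```

A real Yeoman generator wraps exactly this logic in a Generator subclass with prompting() and writing() steps, which is why one can be thrown together in minutes.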

You could imagine creating something like the above in 2–3 minutes for a project that was only going to last the afternoon. Yeoman generators don’t need to wait for infrastructure teams.

Use AST Explorer to Learn How to Edit Existing Files

In theory, the tricky part for Yeoman generators is to edit existing files. But with Abstract Syntax Tree (AST) transformations, the task is made much easier.

Here is how we achieve the desired transformation of Sections.tsx, which we discuss at the top of this post:

_updateFile is boilerplate for using Babel to apply an AST transformation. The crux of the work is _addToSectionMapping, where you see:

  1. At the Program level, it inserts a new Import Declaration.
  2. Of the two Object Expressions, the one with multiple properties is our Section Mapping, and we will insert a key/value pair there.
  3. The Tagged Template Literal is our gql fragment, and we want to insert 2 lines there, the first being a Member Expression and the second being one of a set of “quasi” expressions. (Hmm…)
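The three steps above can be sketched as mutations on a deliberately simplified AST. The node shapes below loosely mirror Babel's (the Program body, the ObjectExpression's properties, the TaggedTemplateExpression's expressions and quasis), but the real _addToSectionMapping manipulates true Babel nodes inside visitor methods; this shows only the logic, not the plumbing.

```typescript
// Simplified stand-ins for the three Babel nodes the transform touches.
interface SimplifiedAst {
  programBody: string[]; // Program-level statements, as source lines
  sectionMapping: Record<string, string>; // the multi-property ObjectExpression
  gqlTemplate: { expressions: string[]; quasis: string[] }; // the gql template
}

export function addToSectionMapping(ast: SimplifiedAst, name: string): SimplifiedAst {
  // 1. Insert a new ImportDeclaration at the Program level.
  ast.programBody.unshift(
    `import { ${name}, ${name}Fragment } from './sections/${name}';`
  );
  // 2. Add a key/value pair to the section-mapping ObjectExpression.
  ast.sectionMapping[name] = name;
  // 3. Append a MemberExpression (the interpolated fragment) and a quasi
  //    (the `...Fragment` spread line) to the gql tagged template.
  ast.gqlTemplate.expressions.push(`${name}Fragment`);
  ast.gqlTemplate.quasis.push(`...${name}Fragment`);
  return ast;
}
```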

If the code performing the transformation looks intimidating, I can only say the same is true for me. Prior to writing this transformation, I had not encountered quasis, and it would be fair to say I found them quasi-confusing (#DadJokes).

The good news is that AST Explorer makes it easy to hammer this sort of thing out. Here’s that same transformation in Explorer. Of the four panes, the upper left contains the source file, the upper right contains the parsed tree, the lower left contains the proposed transformation, and the lower right contains the transformed result.

Looking at the parsed tree immediately tells you the structure of the code you’ve written in Babel terms (you knew that was a Tagged Template Literal, right?), and that gives you what you need to figure out how to apply transformations and test them.

AST transformations also play a crucial role in Codemods. Check out this post from my friend, Joe Lencioni, on the matter.

Extract Mock Content from Zeplin or Figma

Zeplin and Figma are both built to allow engineers to extract content directly to facilitate product development.

In the case of the image above, extracting the copy for an entire paragraph is as simple as selecting the content in Zeplin and clicking the “copy” icon in the Content section of the sidebar.

In the case of Zeplin, images can be extracted by selecting and clicking the “download” icon in the Assets section of the sidebar.

Automate Photo Processing…By Building Media Squirrel?

The photo processing pipeline is most certainly an Airbnb-specific thing. The piece I wanted to highlight was actually Brie’s contribution in creating “Media Squirrel” to wrap an existing API endpoint. Without Media Squirrel, we didn’t have a good way to convert raw images on our machines to JSON objects containing the content from our image processing pipeline, to say nothing of having static URLs we could use as image sources.

The takeaway for Media Squirrel is that when you need something routine that many people will need, never hesitate to build a useful tool everyone can use moving forward. It’s part of Airbnb’s zany culture, and it’s a habit I value deeply.

Intercept Schema and Data in Apollo Server

This part is still a work in progress as concerns the final API. The key things we wanted to do were (a) intercept a remote schema and modify it, and (b) intercept a remote response and modify it. The reason is that while the remote service is the source of truth, we want to be able to iterate on the product before formalizing something like a schema change in the upstream service.

This is the only place in the demo where we got a little cheeky and took liberties with the API. With Schema Composition and Distributed Execution in Apollo’s near-term roadmap, we didn’t want to guess how everything would work precisely, so we just presented the basic concept.

In reality, Schema Composition should give us the ability to define a type and do something along the lines of:
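Here is a sketch of that kind of schema extension. The exact Schema Composition syntax is an assumption, since the final API was not settled at the time of the talk, and QuoteSection is a made-up local type; a stand-in gql tag keeps it self-contained.

```typescript
// Stand-in for graphql-tag so the sketch runs on its own.
const gql = (strings: TemplateStringsArray, ...parts: string[]): string =>
  strings.reduce((acc, s, i) => acc + s + (parts[i] ?? ""), "");

export const typeDefs = gql`
  type QuoteSection {
    quote: String!
    attribution: String
  }

  # EditorialContent is a union on the remote schema; extending it asks the
  # gateway to be aware of one more possible member type.
  extend union EditorialContent = QuoteSection
`;
```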

Note: In this case, the schema knows that EditorialContent is a union, and so by extending it, we’re really asking it to be aware of another possible type.

The code for modifying the Berzerker response looks like this:
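The actual interception code was shown on screen; here is a self-contained sketch of the override behavior it implements, assuming a plain deep-merge helper (the response shape is illustrative, not Berzerker's real payload):

```typescript
type JsonObject = { [key: string]: unknown };

// Deep-merge overrides into the remote response: real data flows through
// untouched except for the fields we proactively replace.
export function overrideResponse(
  remote: JsonObject,
  overrides: JsonObject
): JsonObject {
  const result: JsonObject = { ...remote };
  for (const [key, value] of Object.entries(overrides)) {
    const existing = result[key];
    const bothObjects =
      value !== null && typeof value === "object" && !Array.isArray(value) &&
      existing !== null && typeof existing === "object" && !Array.isArray(existing);
    result[key] = bothObjects
      ? overrideResponse(existing as JsonObject, value as JsonObject)
      : value;
  }
  return result;
}
```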

The idea is a play on what you find in Apollo Server’s mocking API. Here, instead of the mocks filling any gaps in your API, it leaves any gaps in place and proactively overrides the content based on what you provide. This is more likely to be the sort of mocking API we would want.

In Conclusion

Much more important than any one of these tricks is the broader point about moving exceptionally fast and automating as much of the process as possible, particularly around boilerplate, types, and file creation.

Apollo CLI takes care of all the Apollo-specific domain, which frees you up to wire those utilities up in a way that makes sense for your use case.

Some of those use cases, like codegen for types, are universal and end up part of your overall infrastructure. But many of them are just as disposable as the components you use them to build. And none of them involved a product engineer waiting for an infrastructure engineer to build something for them!

So I hope this post explains what you’re seeing in the video, and I hope you have the chance to apply some of these techniques in your day to day work.
