Five years ago, Medium was built with the latest tools and frameworks by people who had experience with them. It's time to update those tools and frameworks.
However, migrating an entire system to new tools and frameworks isn’t an easy task. And doing that while not impacting feature development? That’s even harder.
So, how would you migrate off your existing system without hindering feature development, while incrementally gaining the benefits of the new system along the way?
We began by testing out different frameworks and technologies and figuring out which ones delivered on the dimensions we care about:
- Improve the developer experience, making it faster and more intuitive
- Improve the performance of Medium
With these goals in mind, we asked ourselves the following questions when assessing tools and frameworks:
- Does the tool/framework have an active and responsive community?
- Is the tool/framework relatively mature (e.g. used in production elsewhere)?
- Is the tool/framework intuitive and easy to use?
- Does incorporation of the tool/framework have the potential to increase performance?
- Does the tool/framework give us an easy migration path?
- Are the engineers who will be using the tool/framework excited about it?
Next, we worked out our order of operations. How could people start using this new system as soon as possible, well before it's complete, while we avoid disrupting development of new products?
We decided on a design with two parts:
First, we migrate a subset of pages in our old web client to React.js, with our old client still in place and functional. We use GraphQL as the interface between this new client and our old API service.
We are able to display pages on both the new and the old system because we direct traffic at the proxy layer depending on the route being hit. For example, if you navigate to a profile page you are seeing our new system, but the post page still uses our old one. You can see examples of this in the wild right now, including the user profile page and Series for web!
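The routing described above can be sketched as a simple lookup at the proxy layer. This is a minimal illustration, not Medium's actual proxy configuration; the route patterns and backend names are hypothetical.

```typescript
// Sketch of route-based traffic splitting at the proxy layer.
// First matching pattern wins; anything unmigrated falls through
// to the legacy client.

type Backend = "new-react-client" | "legacy-client";

const routeTable: Array<[RegExp, Backend]> = [
  [/^\/@[\w.-]+\/?$/, "new-react-client"], // user profile pages (migrated)
  [/^\/series\//, "new-react-client"],     // Series for web (migrated)
  [/^\/p\//, "legacy-client"],             // post pages stay on the old system
];

function pickBackend(path: string): Backend {
  for (const [pattern, backend] of routeTable) {
    if (pattern.test(path)) return backend;
  }
  return "legacy-client"; // safe default for every route not yet migrated
}
```

Defaulting to the legacy client is what makes the migration incremental: a route only moves to the new system when an entry for it is added.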
Our old API connects to different databases and contains a lot of business logic. That would be a lot to migrate in one shot. By using our old API as our data source, we avoid needing an immediate major server-side rewrite and are able to incrementally migrate our client.
This means we are able to migrate client-side code to the new system without negatively affecting product development. It also gives product engineers the flexibility to begin working with our new tools sooner and be able to provide value as soon as possible.
We also use the data description — defined in protobufs — from our legacy API as a schema for interfacing with GraphQL. In this way, we are able to be strict about the data we let through our system, which makes it easy to know what data is available, what type it is, and whether it will be present. It also means we set ourselves up perfectly for a future where we use gRPC.
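The idea of deriving a strict GraphQL schema from a protobuf description can be sketched as follows. The `UserProfile` message and its fields are hypothetical, and real tooling would parse `.proto` files rather than hand-written descriptors; the point is only that required proto fields become non-null GraphQL fields, so the schema encodes exactly what data will be present.

```typescript
// Sketch: deriving a GraphQL type definition from a protobuf message
// description. Message and field names are illustrative only.

interface ProtoField {
  name: string;
  type: "string" | "int32" | "bool";
  required: boolean; // whether the field is guaranteed to be present
}

// A hypothetical descriptor, as might be parsed from a .proto file.
const userProfileFields: ProtoField[] = [
  { name: "userId", type: "string", required: true },
  { name: "name", type: "string", required: true },
  { name: "bio", type: "string", required: false },
  { name: "followerCount", type: "int32", required: true },
];

const scalarMap = { string: "String", int32: "Int", bool: "Boolean" } as const;

// Required proto fields map to non-null ("!") GraphQL fields, making the
// schema strict about what is available, its type, and its presence.
function toGraphQLType(name: string, fields: ProtoField[]): string {
  const body = fields
    .map((f) => `  ${f.name}: ${scalarMap[f.type]}${f.required ? "!" : ""}`)
    .join("\n");
  return `type ${name} {\n${body}\n}`;
}
```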
The next phase is to start splitting parts of our server-side code off into services. In doing this, we can start reaping more of the benefits of using GraphQL, since our services will be simpler, more modular, and more performant. Because all of the GraphQL infrastructure is already in place, we can easily have GraphQL talk to new services via gRPC, without worrying about supporting the old API (since these new services will be completely separate).
We’ll be able to use the new services in conjunction with our old API until each piece is separated into its respective service.
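One way to run both backends side by side during this transition is to hide the split inside a GraphQL resolver. This is a sketch under assumptions: both transport functions are stubs with hypothetical names, where real code would use a gRPC client for the new service and an HTTP client for the legacy API.

```typescript
// Sketch: a GraphQL field resolver that reads from a new gRPC service
// when the data has been migrated there, and falls back to the legacy
// API otherwise. Callers of the GraphQL API can't tell the difference.

type User = { id: string; name: string };

// Stub for a call to a new, separate gRPC UserService (hypothetical).
async function userServiceGetUser(id: string): Promise<User | null> {
  const migrated: Record<string, User> = { u1: { id: "u1", name: "Ada" } };
  return migrated[id] ?? null;
}

// Stub for a call to the legacy monolithic API (hypothetical).
async function legacyApiGetUser(id: string): Promise<User> {
  return { id, name: `legacy-${id}` };
}

// The resolver is the only place that knows about both backends, so
// retiring the legacy path later is a one-line change here.
async function resolveUser(id: string): Promise<User> {
  return (await userServiceGetUser(id)) ?? legacyApiGetUser(id);
}
```

Because the backend choice lives behind the resolver, swapping the legacy call out for the new service is invisible to every consumer of the GraphQL API.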
Again, in doing this, we don’t affect product work while we migrate our systems over, as the old systems will still be in place. Also, the backend changes are transparent to users of the new GraphQL API. This makes the transition almost seamless.
Once we’ve transitioned the old API into new services that use gRPC, we’ll be able to retire our legacy API completely.
We’ve completed most of the migration for Part 1, and things are looking great! We’re getting started on Part 2 soon, and are looking forward to…
~ the future ~