Engineering More Reliable Transportation with ML and AI at Uber

In recent months, Uber Engineering has shared how we use machine learning (ML), artificial intelligence (AI), and advanced technologies to create more seamless and reliable experiences for our users. From introducing a Bayesian neural network architecture that more accurately estimates trip growth, to our real-time features prediction system, and even our own internal ML-as-a-service platform, Michelangelo, these two fields are critical to supporting Uber’s mission of developing reliable transportation solutions for everyone, everywhere.

Among other areas, we use ML to enable an efficient ride-sharing marketplace, identify suspicious or fraudulent accounts, suggest optimal pickup and dropoff points, and even facilitate more delicious UberEATS delivery by recommending restaurants and predicting wait times so your food can get to you when you need it. But how do we engineer solutions and develop novel algorithms to meet these challenges at scale?

In this article, we address this question by describing how ML and AI empower Uber Engineering to deliver reliable, safe, and seamless user experiences across our products.

Attendees listen to presentations during our Uber Machine Learning Meetup on September 12, 2017.


Uber and the physical world

Our physical world is dynamic. Changes in the environment that affect human behavior (and use of our apps) create uncertainty, thereby presenting a vast spectrum of learning opportunities.

The complexity of Uber’s problem space goes beyond clicking web links to purchase a car—we actually deal with the car itself. Massive complexity exists when modeling interactions from the movement and routing of cars to optimizing thousands of interconnected entities on each side of a marketplace in real time.

As Uber’s network of users grows, so too does the number of sensors that help us understand the physical world. We use this information to build maps, optimize our marketplace, and train our self-driving cars.

Uber’s problem space is new and rapidly evolving, especially at this scale and intersection of dimensions:

  • Spatial: at both macro levels (global, regional, and city) and micro levels (riders, cars, and goods)
  • Temporal: from seconds to years
  • Human: involved at every stage, from decision making to decision receipt
  • Active: immediate impact and response on the system being modeled
  • Scale: billions of calculations and thousands of decisions made for millions of riders and drivers every minute

In the video below, we highlight how we approach this complex problem space through ML:

In the following sections, we provide an overview of some of the ML and AI challenges our engineers and data scientists tackle on a daily basis, starting with our Maps team.


Designing Uber Maps

Maps are representations of the physical world built on data, and Uber’s core mission is tied to our ability to leverage this data to refine our mapping technologies. From destination search and prediction to map tile generation, ETAs, routing, and up-front fare estimates, maps are integral to every element of our logistics network. In fact, maps cover more than 95 percent of the pixels on the rider and driver app UIs.

Figure 1: A visualization highlights trips originating from Uber’s HQ block in San Francisco.

To enable a more seamless user experience, we pair our mapping technologies with ML. In real time, we identify context-aware suggestions for destinations, taking into account the rider’s current location, time of request, and historical information. Even for users who are new to the platform or a given city, we provide destination suggestions using aggregated information and the conditional probability that any user would select a particular destination given the context of time and space.

Destination Prediction, launched in November 2016, is just one example of how we use ML to improve our maps. Since implementation, we have found that Destination Prediction fulfills over 50 percent of all destination entries, a testament to the accuracy of our algorithms.
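The aggregate approach described above can be thought of as estimating P(destination | context) from historical trip counts. Below is a minimal sketch of that idea, with entirely hypothetical trip data and a coarse (hour, pickup cell) context key standing in for the richer time and space features the real system uses:

```python
from collections import Counter, defaultdict

# Hypothetical historical trips: (hour of day, pickup cell, destination).
trips = [
    (8, "cell_a", "office_park"),
    (8, "cell_a", "office_park"),
    (8, "cell_a", "airport"),
    (22, "cell_a", "home_district"),
    (22, "cell_a", "home_district"),
    (22, "cell_a", "night_market"),
]

# Aggregate destination counts, conditioned on (hour, pickup cell).
counts = defaultdict(Counter)
for hour, cell, dest in trips:
    counts[(hour, cell)][dest] += 1

def suggest(hour, cell, k=3):
    """Rank destinations by P(dest | hour, cell) from aggregate counts."""
    ctx = counts[(hour, cell)]
    total = sum(ctx.values())
    return [(dest, n / total) for dest, n in ctx.most_common(k)]

print(suggest(8, "cell_a"))   # morning context favors office_park
print(suggest(22, "cell_a"))  # evening context favors home_district
```

Because the counts are aggregated across all users, this kind of model can make suggestions even for riders with no trip history of their own.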


Figure 2: Destination prediction ranking leverages various features along with a gradient boosted decision tree to provide destination suggestions to riders.

The Destination Prediction process is composed of five distinct steps:

  1. Clients call service endpoints.
  2. The service retrieves candidates from the feature store and ranks candidates based on provided latitude/longitude and time of day.
  3. The service returns the top seven destination suggestions, with candidates selected from places people have previously traveled and searched. 
  4. Using a combination of stored information about the physical world and ML, the service produces a list of places that come with a set of features, including a histogram of popular destinations with corresponding request times and current time.
  5. A machine learned scorer then ranks these places, incorporating feedback from another model that trains how to weight each of these components separately.
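The candidate-scoring steps above can be sketched as follows. The features, their hand-set weights, and the linear scorer here are illustrative stand-ins for the feature store and the learned gradient-boosted scorer described in the text:

```python
# Hypothetical candidates: (place, visit_frequency, km_from_rider,
# whether the rider has visited at this hour of day).
candidates = [
    ("work",    0.9, 5.0,  1.0),
    ("gym",     0.4, 1.2,  0.0),
    ("airport", 0.1, 20.0, 0.0),
    ("cafe",    0.6, 0.5,  1.0),
]

def score(freq, dist_km, hour_match):
    # Stand-in scorer: reward visit frequency and hour-of-day match,
    # penalize distance. A learned model would weight these instead.
    return 2.0 * freq + 1.5 * hour_match - 0.1 * dist_km

ranked = sorted(candidates, key=lambda c: score(*c[1:]), reverse=True)
top = [name for name, *_ in ranked[:7]]  # the service returns up to seven
print(top)
```

Swapping the hand-set weights for a trained model is what step 5 describes: the scorer learns how to weight each feature separately from feedback data.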

As previously mentioned, when developing this feature we needed to consider how to make context-aware suggestions for new users who did not have a trip history to pull from. To solve this problem, Destination Prediction uses aggregate data to identify and suggest popular locations.

In the video below, we provide a summary of the ways ML is used to build better maps:

Maps and routing systems are critical to move our riders and drivers, but how do the pricing and dispatch decisions occur before we even start moving?


Growing the Uber Marketplace

To answer this question, look no further than the Uber Marketplace, the algorithmic brains and decision engine behind our services. A variety of teams in Marketplace, including Forecasting, Dispatch, Personalization, Demand Modeling, and Dynamic Pricing, build and deploy ML algorithms to handle the immense coordination, hyperlocal decision making, and learning needed to tackle the enormous scale and movement of our transportation network.

In order for our decision engines to be future-aware, we need to be able to “see into the future” as accurately as possible across both space and time. ML enables us to generate spatio-temporal forecasts of supply, demand, and other quantities in real time for up to several weeks ahead. Our algorithms rapidly learn and model the influence of external signals, such as global news events, holidays, and weather on the Marketplace. In the majority of cases, there is limited historical data, and in cases where cities have just launched, there is no data at all. To handle this, we leverage techniques ranging from linear to deep learning models, including long short-term memory (LSTM) networks, to help us forecast the future states of the Marketplace and even predict the onset of extreme events before they occur!
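As a toy illustration of forecasting a single city cell, the sketch below uses a seasonal-naive forecast with a growth adjustment, a deliberately simple stand-in for the linear-to-LSTM models described above. The hourly demand series is hypothetical:

```python
# Hypothetical hourly demand for one city cell over two days (period = 24).
history = [10, 8, 6, 5, 5, 7, 12, 20, 30, 25, 22, 20,
           21, 22, 23, 25, 30, 40, 45, 38, 30, 25, 18, 12,
           11, 9, 7, 6, 6, 8, 13, 22, 33, 27, 24, 22,
           23, 24, 25, 27, 33, 44, 49, 41, 33, 27, 20, 13]

PERIOD = 24  # daily seasonality

def forecast(series, horizon):
    """Repeat the last seasonal cycle, scaled by cycle-over-cycle growth."""
    last_cycle = series[-PERIOD:]
    prev_cycle = series[-2 * PERIOD:-PERIOD]
    growth = sum(last_cycle) / sum(prev_cycle)  # crude demand growth factor
    return [last_cycle[h % PERIOD] * growth for h in range(horizon)]

print([round(x, 1) for x in forecast(history, 6)])
```

A real forecaster must also fold in external signals (events, holidays, weather) and handle cells with little or no history, which is where the deep learning models mentioned above earn their keep.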

Figure 3: A spatio-temporal stream-processing engine, along with custom models, powers the generation of billions of real-time forecasts every minute.

To deliver a frictionless transportation experience, we use ML to create improved experience flows. Relevant destinations and communications are displayed based on aggregate preferences. For example, novel techniques in uplift modeling and aggregate rider cohorting are used and productionized with TensorFlow to reduce friction points in the rider and driver experience.

Once a rider is ready to take a trip, we also use ML to match riders to drivers. Our dispatch algorithms look at thousands of features in real time to generate more than 30 million match pair predictions per minute. This is a massively complex problem since we must consider distance, time, traffic, direction, and other real-world dynamics as well as deeply understand the experience drivers and riders desire. A combination of tree-based models, ensembling techniques, and match optimization methods are used to ensure an optimal trip experience. In fact, innovations in our matching algorithms for back-to-back trips have saved many years of time each week in aggregate for our riders and drivers. These techniques are then industrialized to generate batches of 15,000 predictions with a 100ms response time, leveraging recent and historical data for every trip request that arrives.
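At its core, the matching step is an assignment problem. The sketch below brute-forces the rider-driver assignment that minimizes total ETA on a tiny hypothetical instance; production systems rely on learned ETA models and large-scale optimizers rather than exhaustive search:

```python
from itertools import permutations

# Hypothetical predicted ETAs (minutes) for each rider-driver pair.
eta = {
    ("r1", "d1"): 3, ("r1", "d2"): 7, ("r1", "d3"): 4,
    ("r2", "d1"): 5, ("r2", "d2"): 2, ("r2", "d3"): 6,
    ("r3", "d1"): 9, ("r3", "d2"): 4, ("r3", "d3"): 3,
}
riders, drivers = ["r1", "r2", "r3"], ["d1", "d2", "d3"]

def best_matching():
    """Exhaustively search driver orderings (fine only for tiny instances)."""
    best, best_cost = None, float("inf")
    for perm in permutations(drivers):
        cost = sum(eta[(r, d)] for r, d in zip(riders, perm))
        if cost < best_cost:
            best, best_cost = list(zip(riders, perm)), cost
    return best, best_cost

match, total = best_matching()
print(match, total)
```

Note the key property: minimizing *total* ETA can assign a rider a slightly slower driver so that the system as a whole is better off, which greedy nearest-driver matching would miss.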

Figure 4: Dispatch matching leverages thousands of features to generate thousands of predictions in sub-second timing.

Uber’s scale and the highly dynamic nature of modeling the physical world make our ML challenges unique in the industry. To handle this, we also build systems that automatically learn and improve with scale and algorithms that can detect and address problems in our systems. To grow the Marketplace, we take an idea from inception to production model across multiple teams, generating network efficiency for our riders and drivers while creating a more seamless user experience in the process.

Figure 5: Uber’s Marketplace organization industrializes machine learning by building systems that learn with scale.

In the video below, we discuss how we marry ML and the Uber Marketplace to create better experiences for our users:

Our apps generate a variety of data that can be used to solve a multitude of business use cases, but how do we use this data to enable rapid model building across the company?


Building data science platforms

To rapidly build models and algorithms that leverage the massive amounts of aggregated data processed from Uber’s services, we have built several data science platforms. These platforms enable our data scientists to create technologies that increase the effectiveness and efficiency of our products and operations.

Michelangelo, Uber’s ML-as-a-service platform, allows users across the company to query data, generate features, and apply a host of ML models to solve problems in production. Our Advanced Technologies Group (ATG), which develops our self-driving vehicle technologies, along with UberEATS, Advertising, and Marketing, are just a few of the teams that leverage this powerful platform.

Another example of our ML-enabled technology includes our Natural Language Processing (NLP) platform, which generates and deploys actionable responses for our customer support tickets, chatbots to make driver onboarding easier, and suggested in-app replies. With our commitment to our driver partners, we have been using our NLP platform along with deep learning models to improve the recommended actions and turnaround times for our support tickets.
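As a minimal illustration of suggesting a reply for a support ticket, the sketch below scores token overlap against hypothetical reply templates. The templates and keyword sets are invented for this example; the production platform uses deep learning models rather than keyword matching:

```python
# Hypothetical reply templates and trigger keywords for ticket intents.
templates = {
    "refund": "We are sorry about the charge; a refund is on its way.",
    "lost_item": "We will contact the driver about your lost item.",
    "app_issue": "Please update the app and try again.",
}
keywords = {
    "refund": {"charged", "charge", "refund", "overcharged", "payment"},
    "lost_item": {"lost", "left", "forgot", "phone", "item"},
    "app_issue": {"app", "crash", "crashes", "error", "update"},
}

def suggest_reply(ticket_text):
    """Pick the template whose keywords best overlap the ticket tokens."""
    tokens = set(ticket_text.lower().split())
    best = max(keywords, key=lambda k: len(tokens & keywords[k]))
    return templates[best]

print(suggest_reply("I was charged twice and want a refund"))
```

Even this crude version shows the shape of the problem: map free-form ticket text to an actionable response, so a support agent (or chatbot) starts from a good suggestion instead of a blank page.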

Figure 6: Our customizable NLP platform enables Data Scientists to rapidly build models for chatbots, sentiment analysis, and rapid response for support tickets.

ML is also used to improve internal engineering production systems. To ensure reliability of our services during all hours, we use it to ease the on-call responsibilities of our engineers through our anomaly detection platform. This tool constantly monitors tens of thousands of service metrics to tease out any spurious alerts. We use a combination of recurrent neural networks and novel feature extraction techniques so the system learns the patterns of these metrics, including day-night and weekday-weekend cycles.

Alerting thresholds are constantly adjusted without human intervention so we are always ahead of any potential business critical outage. When ensuring safe and reliable transportation for millions of people daily, a system outage can have a huge impact.
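The idea of learning cyclic baselines and adapting thresholds can be sketched with a simple per-hour z-score detector. The synthetic metric history below (higher daytime traffic, lower overnight) and the z-score rule are stand-ins for the recurrent neural networks the platform actually uses:

```python
from statistics import mean, stdev

# Synthetic per-hour history for one metric: a day-night cycle with noise.
# history[h] holds past observations for hour-of-day h.
history = {h: [100 + 50 * (9 <= h <= 20) + d for d in range(-5, 6)]
           for h in range(24)}

def is_anomaly(hour, value, z_threshold=3.0):
    """Flag values far from the learned baseline for this hour of day."""
    obs = history[hour]
    mu, sigma = mean(obs), stdev(obs)
    return abs(value - mu) > z_threshold * sigma

print(is_anomaly(3, 100))  # typical overnight value
print(is_anomaly(3, 180))  # 3 a.m. spike, flagged
```

Because the baseline is conditioned on hour of day, a value that is perfectly normal at noon can still be flagged at 3 a.m., which is exactly the cycle-awareness described above.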

Figure 7: Uber’s anomaly detection platform helps our engineering teams to maximize the actionable on-call alerts.

In the video below, we showcase Michelangelo, our extreme event forecasting techniques, and other work that leverages our ML research:

Now that we have gone through how we use ML to power our data science platforms, how can we apply these systems to the transportation of the future: self-driving vehicles?


Engineering self-driving vehicles with Uber ATG

Globally, 1.3 million people die in car accidents each year; in the U.S. alone, 94 percent of fatal accidents are a result of human error. Uber is committed to developing technologies that put a dent in these statistics by increasing safety for both current and future users. Powered by ML and AI, self-driving vehicles will play a significant role in making the roads safer.

Figure 8: The anatomy of an Uber ATG self-driving car includes LiDAR sensors, cameras, antenna for GPS positioning, compute and storage, and radar.

To date, Uber’s self-driving vehicles have completed over 30,000 real world passenger trips in places like Pittsburgh, PA and Tempe, AZ. In these environments, Uber’s self-driving vehicles leverage ML to inform how they move through space, helping our automated systems understand the differences between stationary and moving vehicles, pedestrians, and everything in between.

Figure 9: Multiple inputs are used for training along with intermediate feedback loops to build learning for end-to-end self-driving.

To ensure that our vehicles operate as safely as possible, Uber’s self-driving ML technologies go beyond standard approaches that focus on teaching a car to drive. At a high level, we think of the problem as taking in a variety of inputs to then determine the direction of the steering wheel and speed of the vehicle. However, this approach does not solve the end-to-end driving problem for machines. Instead, we must also consider the number of examples to give the network to adequately capture the dynamics of the physical world. If there is an accident, the model must be explainable, and if the model does not work, it must be debuggable through introspection.

This extra layer of insight ensures that our ML tools and algorithms can safely and reliably navigate Uber’s self-driving vehicles to an accident-free future.

Figure 10: Time-aggregated views create denser information and temporal features for algorithms focused on safety.


The future of ML and AI at Uber

In this article, we have only discussed a small sampling of the exciting challenges we solve with ML and AI at Uber. Among other technologies, we also apply them to our UberEATS, rider and driver cohorting, account fraud and risk detection, unsupervised learning for suggesting pickup points, improving ETA estimates for riders and ETR estimates for drivers, probabilistic modeling for decision risk management, and data pipeline optimizations.

If tackling ML and AI challenges that push the limits of scale interests you, apply for a role on our team!

Chintan Turakhia is an engineering manager on Uber’s Marketplace Pricing team, as well as a member of the Bay Area Machine Learning Meetup group.

Waleed Kadous (Principal Engineer on the Sensing & Perception and Maps team), Fran Bell (Data Science Manager on the Business Platform team), and Kumar Chellapilla (Engineering Director for Uber’s Advanced Technologies Group) also contributed to this article.
