How Netflix uses Druid for Real-time Insights to Ensure a High-Quality Experience

"Apache Druid is a high performance real-time analytics database. It's designed for workflows where fast queries and ingest really matter. Druid excels at instant data visibility, ad-hoc queries, operational analytics, and handling high concurrency."

As such, Druid fits our use case well: a high ingestion rate of event data, with high cardinality and fast query requirements.

Druid is not a relational database, but some concepts are transferable. Rather than tables, we have datasources. As with relational database tables, these are logical groupings of data, stored as columns. Unlike relational databases, there is no concept of joins, so we need to ensure that any column we want to filter or group by is included in each datasource.

There are primarily three classes of columns in a datasource: time, dimensions, and metrics.

Everything in Druid is keyed by time. Each datasource has a timestamp column that serves as the primary partition mechanism. Dimensions are values that can be used to filter and group by. Metrics are values that can be aggregated, and they are nearly always numeric.
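To make the three column classes concrete, here is a toy sketch of an event row and a simple metric aggregation. The field names are illustrative only, not an actual Netflix or Druid schema:

```python
# A hypothetical event row, split into Druid's three column classes.
# All names here are illustrative, not an actual Netflix schema.
row = {
    # Time: every row is keyed by a timestamp.
    "timestamp": "2020-03-01T12:34:56Z",
    # Dimensions: values used to filter and group by.
    "dimensions": {"device_type": "smart-tv", "country": "US"},
    # Metrics: numeric values that can be aggregated.
    "metrics": {"play_delay_ms": 420, "rebuffer_count": 1},
}

def total_metric(rows, name):
    """Sum one metric across rows -- the kind of aggregation Druid performs."""
    return sum(r["metrics"][name] for r in rows)
```

In a real ingestion spec these classes are declared up front, so Druid knows which columns to index for filtering and which to pre-aggregate.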

By removing the ability to perform joins, and assuming data is keyed by timestamp, Druid can make some optimizations in how it stores, distributes, and queries data, such that we're able to scale a datasource to trillions of rows and still achieve query response times in the tens of milliseconds.

To achieve this level of scalability, Druid divides the stored data into time chunks. The duration of time chunks is configurable. An appropriate duration can be chosen depending on your data and use-case. For our data and use-case, we use 1 hour time chunks. Data within a time chunk is stored in one or more segments. Each segment holds rows of data all falling within the time chunk as determined by its timestamp key column. The size of the segments can be configured such that there is an upper bound on the number of rows, or the total size of the segment file.
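The partitioning described above can be sketched in a few lines of Python. This is a simplified model, not Druid's actual segment-building code; the row-count cap stands in for the configurable segment size limit:

```python
from collections import defaultdict

HOUR_MS = 3_600_000  # 1-hour time chunks, as in our configuration

def to_chunk(ts_ms, chunk_ms=HOUR_MS):
    """Floor a millisecond timestamp to the start of its time chunk."""
    return ts_ms - (ts_ms % chunk_ms)

def build_segments(rows, chunk_ms=HOUR_MS, max_rows_per_segment=2):
    """Group rows into time chunks by their timestamp key, then split
    each chunk into segments capped at max_rows_per_segment rows."""
    chunks = defaultdict(list)
    for row in rows:
        chunks[to_chunk(row["timestamp"], chunk_ms)].append(row)
    return {
        chunk: [
            chunk_rows[i:i + max_rows_per_segment]
            for i in range(0, len(chunk_rows), max_rows_per_segment)
        ]
        for chunk, chunk_rows in chunks.items()
    }
```

With a tiny cap like `max_rows_per_segment=2`, three rows landing in the same hour produce two segments for that chunk, mirroring how a full chunk of real data is spread across multiple segment files.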

Example of Segments

When querying data, Druid sends the query to all nodes in the cluster that are holding segments for the time chunks within the range of the query. Each node processes the query in parallel across the data it is holding, before sending the intermediate results back to the query broker node. The broker will perform the final merge and aggregation before sending the result set back to the client.
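This scatter/gather flow can be sketched as follows. Again a toy model rather than Druid's implementation: each "node" holds a list of segments, computes a partial group-by over the query's time range, and the "broker" merges the partials into the final result:

```python
from collections import Counter

def node_query(segments, dim, metric, start, end):
    """Data node: partial aggregation -- sum `metric` grouped by `dim`
    over rows whose timestamp falls in [start, end)."""
    partial = Counter()
    for segment in segments:
        for row in segment:
            if start <= row["timestamp"] < end:
                partial[row[dim]] += row[metric]
    return partial

def broker_query(nodes, dim, metric, start, end):
    """Broker: fan the query out to every node holding relevant
    segments, then merge the intermediate results."""
    result = Counter()
    for segments in nodes:
        result.update(node_query(segments, dim, metric, start, end))
    return dict(result)
```

The key property is that each node only scans the segments it holds, so the per-node work stays bounded as the cluster grows, and the broker's merge is cheap relative to the scans.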

Druid Cluster Overview
