Pinot

Pinot is a distributed relational OLAP datastore developed at LinkedIn. It is designed to support large-scale, low-latency analytics on arbitrary data sets. For use cases that are sensitive to data freshness, Pinot can ingest streaming data directly from Kafka. For applications that can tolerate a lag of a few hours to a day, Pinot can ingest batch data from Hadoop. It can also dynamically merge data that comes from both the offline and the online system.

Pinot uses a hybrid storage model. It divides tables into segments, which are sets of tuples; the tuples inside each segment are organized in a columnar manner. The segment is the basic unit of work in Pinot: data from Kafka or Hadoop is processed and cached locally as segments on Pinot server nodes; each segment stores metadata, indexes, and the necessary zone maps for its tuples; storage optimizations and indexes are applied per segment; and query plans and optimizations are also generated and executed on a per-segment basis.

The external building blocks of Pinot are Apache ZooKeeper and Apache Helix.

History

Pinot was first developed by LinkedIn in 2014 as internal analytics infrastructure. It originated from the need to scale out OLAP systems to support low-latency, real-time queries on high-volume data. It was open-sourced in 2015 and entered the Apache Incubator in 2018. Pinot is named after Pinot noir, a grape varietal that can produce some of the most complex wines but is among the toughest to grow and process. It is a portrayal of data: powerful, but hard to analyze.

Query Interface

Custom API

Pinot uses the Pinot Query Language (PQL), a subset of SQL, as its query interface. The operations PQL supports are selection, ordering and pagination on selections, filtering, aggregation, and grouping on aggregations. It does not support joins, nested queries, record-level creation, updates, or deletion, nor any data definition language (DDL).

A grouped aggregation truncates its result to the top 10 tuples by default; PQL uses a TOP n clause to change this truncation.
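
For illustration, a hedged sketch of what these operations can look like in PQL (the table and column names here are invented, and syntax details may vary by version):

```sql
-- Aggregation with filtering and grouping; TOP overrides the
-- default truncation of 10 result groups.
SELECT COUNT(*)
FROM profileViews
WHERE country = 'US'
GROUP BY industry
TOP 20

-- Selection with ordering and a result limit.
SELECT memberId, viewCount
FROM profileViews
ORDER BY viewCount DESC
LIMIT 10
```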

Query Execution

Tuple-at-a-Time Model, Vectorized Model

Pinot supports both the vectorized model and the tuple-at-a-time (iterator) model; which one is used depends on the query type and on how the column data is organized. Bulk optimizations can be applied when a target column has been physically reordered. Pinot also leverages the zone map of each segment to accelerate queries.

Each query is split into subqueries that execute in parallel on the corresponding segments.
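
As a rough illustration of this per-segment model (a sketch in plain Java, not Pinot's internal API): the same subquery runs over each immutable segment in parallel, and the partial results are merged afterwards.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch only: a filtered COUNT(*) split into one subquery
// per segment, executed in parallel, and merged on the server.
class PerSegmentExecutionSketch {
    // Segments are immutable, so scanning them concurrently is safe.
    record Segment(int[] countryDictIds) {}

    static long countMatches(Segment segment, int targetDictId) {
        long count = 0;
        for (int id : segment.countryDictIds) {
            if (id == targetDictId) count++;   // per-segment subquery
        }
        return count;
    }

    static long execute(List<Segment> segments, int targetDictId) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Long>> partials = segments.stream()
                .map(s -> pool.submit(() -> countMatches(s, targetDictId)))
                .toList();
            long total = 0;
            for (Future<Long> partial : partials) {
                total += partial.get();          // merge partial results
            }
            return total;
        } finally {
            pool.shutdown();
        }
    }
}
```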

Indexes

BitMap, Inverted Index (Full Text)

Pinot supports pluggable indexes: sorted indexes, bitmap indexes, and inverted indexes. Bitmap indexes optimize queries on categorical data, and inverted indexes support lookups by keyword. These were chosen to match the characteristics of social data, which is usually categorical and textual.

Inverted indexes can be built on top of bitmaps, and bitmap indexes can be optimized with various compression techniques. Data columns can also be physically reordered to optimize specific queries, since filters on such columns usually target a contiguous range of the column data.
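
The core idea behind a bitmap-based inverted index on a categorical column can be sketched as follows (illustrative only, not Pinot's implementation): each distinct value maps to a bitmap whose set bits are the rows containing that value, so equality and IN filters reduce to bitwise operations.

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

// Illustrative bitmap inverted index for one categorical column.
class BitmapInvertedIndexSketch {
    private final Map<String, BitSet> valueToRows = new HashMap<>();

    BitmapInvertedIndexSketch(String[] columnValues) {
        for (int row = 0; row < columnValues.length; row++) {
            valueToRows
                .computeIfAbsent(columnValues[row], v -> new BitSet())
                .set(row);                       // mark row as containing the value
        }
    }

    // Rows where column == value.
    BitSet matching(String value) {
        BitSet rows = valueToRows.getOrDefault(value, new BitSet());
        return (BitSet) rows.clone();
    }

    // Rows where column IN (v1, v2): union of the per-value bitmaps.
    BitSet matchingAny(String v1, String v2) {
        BitSet result = matching(v1);
        result.or(matching(v2));
        return result;
    }
}
```

In practice such bitmaps are typically stored in compressed form rather than as raw bit arrays, which is where the compression techniques mentioned above come in.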

Compression

Dictionary Encoding, Run-Length Encoding, Bitmap Encoding, Bit Packing / Mostly Encoding

Pinot leverages several types of encoding to reduce storage overhead; a typical segment ranges from a few hundred megabytes to a few gigabytes in size. Each encoding technique has specialized physical operators that optimize query execution over data stored in that encoding.
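
As a sketch of the simplest of these techniques, dictionary encoding combined with bit packing (illustrative, not Pinot's on-disk format): each distinct value of a low-cardinality column is assigned a small integer id, and the column stores only the ids, which need only ceil(log2(N)) bits each for N distinct values.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative dictionary encoding of a low-cardinality string column.
class DictionaryEncodingSketch {
    final List<String> dictionary = new ArrayList<>();   // id -> value
    final Map<String, Integer> ids = new HashMap<>();    // value -> id
    final int[] encodedColumn;

    DictionaryEncodingSketch(String[] rawColumn) {
        encodedColumn = new int[rawColumn.length];
        for (int row = 0; row < rawColumn.length; row++) {
            Integer id = ids.get(rawColumn[row]);
            if (id == null) {
                id = dictionary.size();
                dictionary.add(rawColumn[row]);
                ids.put(rawColumn[row], id);
            }
            encodedColumn[row] = id;
        }
    }

    // With N distinct values, each id needs only ceil(log2(N)) bits when
    // bit-packed, instead of a full string per row.
    int bitsPerValue() {
        return Math.max(1, 32 - Integer.numberOfLeadingZeros(dictionary.size() - 1));
    }

    String decode(int row) {
        return dictionary.get(encodedColumn[row]);
    }
}
```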

Data Model

Relational

Pinot is a relational datastore. The data type of an attribute can be an integer of various widths, a floating-point number, a string, a boolean, or an array. Each column is typed as a dimension, a metric, or a time column.

Concurrency Control

Not Supported

Pinot pushes query execution down to the segments. Since segments are immutable, there are no race conditions during server-side query execution.

Storage Model

Hybrid

Pinot uses a hybrid storage model that divides rows into segments and stores the data inside each segment in a columnar manner. A segment is immutable and typically contains tens of millions of rows; it also stores metadata, indexes, and zone maps for its tuples.

Storage Organization

Heaps

Pinot server nodes store segments as directories in the UNIX filesystem. Each segment directory contains a metadata file and an index file: the metadata file describes the columns in the segment, and the index file stores the indexes for all of those columns. Global metadata about segments, including the mapping from each segment to its location, is maintained by the controller nodes.

Each segment is effectively a cache of the data, with an expiration time that ensures a certain level of data freshness. The original sources of segments are external stores such as Kafka and Hadoop.

System Architecture

Shared-Nothing

Pinot consists of four parts: servers, controllers, brokers, and minions. Together they support data storage, data management, and query processing. A brief introduction to each is given below:

### Servers

Servers are responsible for data storage. Pinot distributes segments across the server nodes; each segment is fetched from external stores under the control of the controller nodes. A segment has multiple replicas, which are used in an active-active manner.

### Controllers

Controllers are responsible for maintaining global metadata and system state. They are implemented with Apache Helix.

### Brokers

Brokers are responsible for query routing. They control the flow of each query: where it should be routed, and how the final result is assembled from the intermediate results returned by different nodes.

### Minions

Minions are responsible for running maintenance tasks, which are usually time-consuming and should not interfere with running queries.

In a typical segment load, the controller nodes first tell the server nodes to fetch segments. The server nodes then fetch metadata from ZooKeeper and load the segments from the corresponding external store. Finally, the controller and broker nodes update the global metadata and cluster state.

To execute a query from a client, a broker node first picks a routing table for the query and contacts the corresponding server nodes. The server nodes execute the query on the segments they hold; the results are gathered, merged, and returned to the broker. The broker finally checks the result for errors or timeouts and then replies to the client.
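
From the client's perspective, this scatter-gather is hidden behind the broker. As a hedged example using the Pinot Java client (the ZooKeeper address, table name, and query are made up, and the exact client API may differ between versions):

```java
import org.apache.pinot.client.Connection;
import org.apache.pinot.client.ConnectionFactory;
import org.apache.pinot.client.ResultSet;
import org.apache.pinot.client.ResultSetGroup;

public class PinotClientSketch {
    public static void main(String[] args) {
        // The client discovers brokers through ZooKeeper; a broker then routes
        // the query to servers and merges their per-segment results.
        Connection connection =
            ConnectionFactory.fromZookeeper("localhost:2181/PinotCluster");
        ResultSetGroup results =
            connection.execute("SELECT COUNT(*) FROM profileViews");
        ResultSet resultSet = results.getResultSet(0);
        System.out.println(resultSet.getString(0, 0));  // first row, first column
    }
}
```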

Checkpoints

Not Supported

Pinot uses replicas to provide fault tolerance and high availability. It also uses redundant controller instances to improve availability.

However, checkpoints are not supported: because segments are immutable, there are no writes to a segment during query execution. A segment can, however, be replaced entirely by a newer version.