PolarDB is a commercial, cloud-based relational database product developed by Alibaba Cloud. It is designed for clients with high read demand. PolarDB is compatible with two popular databases: MySQL and PostgreSQL. It has three layers: users interact with the database through the computing layer, PolarFS serves as a distributed file system layer, and PolarStorage serves as the storage layer. PolarDB uses InnoDB as its storage engine.


PolarDB was first released in September 2017 and was officially commercialized in April 2018. At the same time, Alibaba Cloud gave a talk about PolarDB at the IEEE International Conference on Data Engineering (ICDE).

Query Interface


PolarDB supports SQL as its standard query interface, as MySQL does. It adds parallel query as a feature, which can be enabled in several ways:

set max_parallel_degree = n
set force_parallel_mode = on
SELECT /*+ PARALLEL() */ * FROM ...
SELECT /*+ PARALLEL(n) */ * FROM ...

Concurrency Control

Multi-version Concurrency Control (MVCC)

The primary node (read-write) and replica nodes (read-only) communicate through a message sender and an ack receiver. Each of them has its own buffer pool. A replica updates itself at runtime through redo operations. PolarDB uses the difference between the written log sequence number and the replica's applied log sequence number as the replica lag, to keep track of the version the replica is holding. In PolarFS, the ParallelRaft protocol coordinates the multiple data chunk servers. ParallelRaft is a consensus protocol derived from Raft, but it tolerates out-of-order I/O completion, which improves the database's I/O throughput.
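The replica-lag bookkeeping described above can be sketched as follows; this is a minimal illustration that treats log sequence numbers as plain integers (names like ReplicaTracker are invented for the sketch, not PolarDB's API):

```python
# Minimal sketch of replica-lag tracking: the primary advances a written
# LSN (log sequence number); each replica acknowledges the LSN it has
# applied. Lag = written_lsn - applied_lsn measures how stale a replica is.

class ReplicaTracker:
    def __init__(self):
        self.written_lsn = 0      # latest LSN written by the primary
        self.applied_lsn = {}     # replica id -> LSN applied so far

    def on_primary_write(self, lsn):
        # The primary's message sender announces a newly written LSN.
        self.written_lsn = max(self.written_lsn, lsn)

    def on_replica_ack(self, replica_id, lsn):
        # The ack receiver records how far a replica's redo has progressed.
        self.applied_lsn[replica_id] = lsn

    def lag(self, replica_id):
        # Replica lag: distance between written and applied LSNs.
        return self.written_lsn - self.applied_lsn.get(replica_id, 0)


tracker = ReplicaTracker()
tracker.on_primary_write(100)
tracker.on_replica_ack("replica-1", 70)
print(tracker.lag("replica-1"))   # -> 30
```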


Nested Loop Join

Each worker first scans and joins its own partition. PolarDB then merges the workers' join results and returns them to the client.
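A toy illustration of this partition-then-merge scheme, assuming the outer table is split evenly among workers (the function names are invented for the sketch):

```python
# Toy partitioned nested loop join: each "worker" joins its own slice of
# the outer table against the inner table, then the results are merged.

def nested_loop_join(outer_part, inner, key=0):
    # Plain nested loop join matching on one column of each row.
    return [(o, i) for o in outer_part for i in inner if o[key] == i[key]]

def parallel_join(outer, inner, workers=2):
    # Split the outer table into one partition per worker.
    size = (len(outer) + workers - 1) // workers
    parts = [outer[i:i + size] for i in range(0, len(outer), size)]
    # Each worker joins its own partition; results are merged at the end.
    merged = []
    for part in parts:
        merged.extend(nested_loop_join(part, inner))
    return merged

outer = [(1, "a"), (2, "b"), (3, "c"), (4, "d")]
inner = [(2, "x"), (4, "y")]
print(parallel_join(outer, inner))
# -> [((2, 'b'), (2, 'x')), ((4, 'd'), (4, 'y'))]
```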

Isolation Levels

Repeatable Read

PolarDB maintains a read view, a snapshot of the read-write transactions active when a transaction starts. Replica nodes are read-only and therefore have no read-write transactions of their own. The primary node sends an initial read view to a replica as part of the handshake, and the view is updated during redo.
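Under repeatable read, a read view can be modeled as a snapshot of the transactions that were still active at start time; a row version is visible only if its creating transaction had already committed then. This is a simplified sketch of InnoDB-style visibility, not PolarDB's exact internal structures:

```python
# Simplified read-view visibility check, modeled on InnoDB-style MVCC:
# the snapshot records which transactions were still active when the
# reading transaction started; versions they created are invisible.

class ReadView:
    def __init__(self, active_ids, next_id):
        self.active = set(active_ids)  # transactions active at snapshot time
        self.low_limit = next_id       # first transaction id not yet assigned

    def visible(self, version_trx_id):
        if version_trx_id >= self.low_limit:
            return False               # started after the snapshot was taken
        return version_trx_id not in self.active  # invisible if uncommitted then


view = ReadView(active_ids=[8, 9], next_id=12)
print(view.visible(5))    # -> True: committed before the snapshot
print(view.visible(8))    # -> False: still active at snapshot time
print(view.visible(13))   # -> False: began after the snapshot
```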



Indexes

Similar to MySQL, the B+Tree is the default index data structure, ordered by primary key. One B+Tree-related optimization is that PolarDB records the location of the last insertion to speed up the next one. During parallel query execution, a PolarDB feature, the B+Tree is partitioned among multiple workers. Each worker can only see its own partition. When a worker finishes one partition, it automatically attaches to a new partition.
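The worker-partitioning idea can be illustrated with ordinary key ranges standing in for B+Tree subtrees; this is a sketch under that assumption, since PolarDB does not publish its internal partitioning in this detail:

```python
# Sketch of handing index partitions to parallel workers: key ranges stand
# in for B+Tree subtrees; a worker that finishes one partition attaches to
# the next unclaimed one.

from collections import deque

def split_ranges(lo, hi, parts):
    # Divide [lo, hi) into contiguous key ranges, one per partition.
    step = (hi - lo + parts - 1) // parts
    return [(s, min(s + step, hi)) for s in range(lo, hi, step)]

def run_workers(ranges, n_workers):
    # Workers pull partitions from a shared queue until it is empty.
    queue = deque(ranges)
    assignment = {w: [] for w in range(n_workers)}
    w = 0
    while queue:
        assignment[w].append(queue.popleft())
        w = (w + 1) % n_workers   # round-robin stands in for "whoever is idle"
    return assignment

ranges = split_ranges(0, 100, 4)
print(ranges)                  # -> [(0, 25), (25, 50), (50, 75), (75, 100)]
print(run_workers(ranges, 2))  # -> {0: [(0, 25), (50, 75)], 1: [(25, 50), (75, 100)]}
```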

Storage Architecture


PolarDB is disk-oriented, like MySQL. Primary and replica nodes have their own buffer pools, and they access data and logs in shared storage. Only the primary has the right to flush pages during normal operation. After redo in recovery, replicas will flush pages to disk. The primary node, after receiving the read view from the new master, will also write pages to disk.



Checkpoints

All modifications before a checkpoint must already have been applied to the data chunks. Changes committed after a checkpoint are allowed to exist only in the log. During recovery, PolarDB chooses the newest checkpoint rather than the one covering the longest log prefix.
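The recovery rule above can be sketched as: pick the checkpoint with the newest LSN, then replay only the log records after it. This is a simplified model with invented names, not PolarDB's recovery code:

```python
# Simplified recovery: all changes before a checkpoint's LSN are already in
# the data chunks, so redo starts from the newest checkpoint rather than
# replaying the longest possible log prefix.

def recover(checkpoints, log):
    # checkpoints: list of LSNs at which checkpoints completed
    # log: list of (lsn, change) records in LSN order
    start = max(checkpoints)                      # choose the newest checkpoint
    to_redo = [rec for lsn, rec in log if lsn > start]
    return start, to_redo

checkpoints = [40, 90, 70]
log = [(50, "A"), (80, "B"), (95, "C"), (120, "D")]
print(recover(checkpoints, log))   # -> (90, ['C', 'D'])
```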

System Architecture


Within the shared-disk layer, PolarDB has multiple data chunk servers, each storing chunks of data. Each chunk server has its own stand-alone non-volatile memory (NVMe) SSD. Compute nodes (database servers) read and write to the disks via remote direct memory access (RDMA).

Hardware Acceleration


PolarDB uses RDMA to connect storage nodes and compute nodes, which removes the network bottleneck on I/O performance.

Storage Organization


Each chunk server has a write-ahead log (WAL). Any modification to a chunk is appended to the log before the chunk is updated. After the primary (read-write) node modifies pages, it sends the logs to shared storage where replicas can access them. Replicas have log-apply threads that apply these changes to their own versions during redo.
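The append-before-update discipline can be sketched as follows; this is a minimal model of a chunk server's WAL, not PolarFS code:

```python
# Minimal write-ahead logging sketch: every modification is appended to the
# log before the chunk itself is updated, so redo can replay the log to
# rebuild the chunk's state.

class ChunkServer:
    def __init__(self):
        self.wal = []      # append-only write-ahead log
        self.chunk = {}    # page id -> contents

    def write(self, page_id, data):
        self.wal.append((page_id, data))  # 1. log first
        self.chunk[page_id] = data        # 2. then update the chunk

    def redo(self):
        # A replica's log-apply thread rebuilds state by replaying the log.
        state = {}
        for page_id, data in self.wal:
            state[page_id] = data
        return state


server = ChunkServer()
server.write(1, "v1")
server.write(1, "v2")
server.write(2, "x")
print(server.redo())   # -> {1: 'v2', 2: 'x'}
```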

Compatible With

MySQL, PostgreSQL