OceanBase is a distributed, scalable, shared-nothing relational DBMS. It targets financial scenarios that are demanding on performance, cost, and scalability, and that require high availability and strong consistency. It is designed and optimized for OLTP applications on relational structured data, though its shared-nothing architecture also supports OLAP applications.
In 2010, OceanBase team leader Zhenkun Yang joined Alibaba. Facing rapidly increasing concurrency in Alibaba's business and ever-shorter development cycles for new transaction systems, Yang concluded that existing DBMSs could not support Alibaba's rapidly growing workloads. He decided to abandon the traditional DBMS framework and develop a new DBMS from scratch. From the very beginning, he set three key principles for the new product: (1) distributed, (2) low cost, (3) high reliability.
In 2013, Alipay decided to stop using Oracle. Since the alternative, MySQL, could not guarantee strong consistency between the active and standby servers, OceanBase got its first major opportunity. From then on, OceanBase was no longer open sourced.
From 2014 to 2016, the team spent three years developing OceanBase 1.0, the first commercial DBMS to support distributed transactions.
In 2017, OceanBase began serving external customers.
In 2019, OceanBase beat Oracle and took first place in the TPC-C benchmark.
Dictionary Encoding, Delta Encoding, Run-Length Encoding, Prefix Compression
OceanBase uses column compression. It implements several encoding algorithms and automatically chooses the most suitable one for each column. Column compression leverages data similarity within a column, such as a shared data type and value range.
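As an illustration of per-column encoding selection, here is a minimal Python sketch; the heuristics and thresholds are invented for the example and are not OceanBase's actual selection logic:

```python
# Hypothetical per-column encoding chooser; thresholds are illustrative only.
def choose_encoding(column):
    values = list(column)
    # Long runs of repeated values favor run-length encoding.
    runs = sum(1 for i, v in enumerate(values) if i == 0 or v != values[i - 1])
    if runs <= len(values) // 4:
        return "run-length"
    # Few distinct values favor dictionary encoding.
    if len(set(values)) <= len(values) // 8:
        return "dictionary"
    # Numeric columns with nearby values favor delta encoding.
    if all(isinstance(v, int) for v in values):
        return "delta"
    return "raw"

print(choose_encoding([7, 7, 7, 7, 8, 8, 8, 8]))  # run-length
```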
Multi-version Concurrency Control (MVCC)
OceanBase adopts MVCC for concurrency control. If an operation involves a single partition, or multiple partitions on a single server node, it reads the snapshot of that server node. If an operation involves partitions on multiple server nodes, it executes a distributed snapshot read.
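A minimal Python sketch of the snapshot-read idea behind MVCC, assuming a toy versioned store keyed by commit timestamp (not OceanBase's actual engine):

```python
# Each key maps to a list of (commit_ts, value) versions in commit order.
store = {"k": [(10, "v1"), (20, "v2"), (35, "v3")]}

def snapshot_read(key, snapshot_ts):
    """Return the newest version committed at or before snapshot_ts."""
    visible = [v for ts, v in store.get(key, []) if ts <= snapshot_ts]
    return visible[-1] if visible else None

print(snapshot_read("k", 25))  # 'v2': the version at ts=35 is invisible
```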
B+Tree is the only index structure available when creating an index in OceanBase.
As OceanBase splits tables into partitions, it supports both local indexes, which index a single partition, and global indexes, which span partitions.
OceanBase also supports secondary indexes; a secondary index entry combines the index key with the table's primary key.
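A toy Python sketch of this secondary-index layout, where every index entry is the pair (index key, primary key); the names and structures here are illustrative only:

```python
# pk -> (name, age); the primary table.
table = {1: ("alice", 30), 2: ("bob", 25), 3: ("alice", 41)}

# Secondary index on `name`: entries are (index_key, primary_key), so every
# entry is unique and points back to its row.
name_index = sorted((row[0], pk) for pk, row in table.items())

def lookup_by_name(name):
    # Scan matching index entries, then fetch full rows via the primary key.
    return [table[pk] for key, pk in name_index if key == name]

print(lookup_by_name("alice"))  # [('alice', 30), ('alice', 41)]
```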
Read Committed, Snapshot Isolation, Serializable
Read committed, the default isolation level, has been supported since OceanBase 1.0.
Snapshot isolation has been supported since OceanBase 2.0.
Serializable has been supported since OceanBase 2.2.
Nested Loop Join, Hash Join, Sort-Merge Join, Index Nested Loop Join
OceanBase supports three join algorithms: Nested Loop Join, Sort-Merge Join, and Hash Join. Sort-Merge Join and Hash Join work only for equi-joins, while Nested Loop Join works under any join condition. For Nested Loop Join, OceanBase supports both sequential scan and index scan on the inner table. OceanBase also implements Blocked Nested Loop Join.
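For example, here is a minimal hash join in Python, which works only for equi-joins as noted above; it illustrates the general algorithm, not OceanBase's executor:

```python
# Build a hash table on one input, then probe it with the other.
def hash_join(build_rows, probe_rows, build_key, probe_key):
    table = {}
    for row in build_rows:                      # build phase
        table.setdefault(row[build_key], []).append(row)
    for row in probe_rows:                      # probe phase
        for match in table.get(row[probe_key], []):
            yield {**match, **row}

users = [{"uid": 1, "name": "alice"}, {"uid": 2, "name": "bob"}]
orders = [{"uid": 1, "item": "pen"}, {"uid": 1, "item": "ink"}]
print(list(hash_join(users, orders, "uid", "uid")))
```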
OceanBase uses physiological logging to record all modifications. A physiological log record describes the modification to a single page without specifying the detailed data organization within that page. OceanBase uses the Paxos consensus algorithm to synchronize log replicas across server nodes.
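A sketch of what a physiological log record could look like: it identifies the page and the logical operation applied to it, but leaves the byte layout within the page to the page itself. All field names here are hypothetical, not OceanBase's actual log format:

```python
from dataclasses import dataclass

@dataclass
class LogRecord:
    lsn: int        # log sequence number
    page_id: int    # which page was modified
    op: str         # logical operation, e.g. "insert_row"
    payload: dict   # operation arguments, e.g. the row itself

def redo(page, record):
    """Reapply a record against a page; the page decides its own layout."""
    if record.op == "insert_row":
        page.append(record.payload["row"])  # placement within page is free

page = []
redo(page, LogRecord(lsn=42, page_id=7, op="insert_row", payload={"row": (1, "a")}))
print(page)  # [(1, 'a')]
```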
Intra-Operator (Horizontal), Inter-Operator (Vertical)
OceanBase supports both vertical (inter-operator) and horizontal (intra-operator) parallelism, which increases throughput and reduces latency.
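As a toy illustration of intra-operator (horizontal) parallelism, the following Python sketch splits one scan operator across partitions and runs the pieces on a worker pool; it is illustrative only:

```python
from concurrent.futures import ThreadPoolExecutor

partitions = [range(0, 100), range(100, 200), range(200, 300)]

def scan_partition(part):
    # Each worker scans and filters its own partition independently.
    return [x for x in part if x % 7 == 0]

with ThreadPoolExecutor(max_workers=3) as pool:
    results = pool.map(scan_partition, partitions)

matches = [x for chunk in results for x in chunk]
print(len(matches))  # 43 multiples of 7 in [0, 300)
```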
OceanBase implements a code generator that translates the logical execution plan into a reentrant physical execution plan. Its work includes translating logical operators into physical operators, converting infix expressions into suffix (postfix) expressions, deriving logical information from syntactic information, eliminating redundant data structures, etc. OceanBase caches these plans to avoid recompiling them in the future.
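The infix-to-suffix step can be illustrated with the classic shunting-yard algorithm; this Python sketch is a generic version of that textbook technique, not OceanBase's code generator:

```python
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_postfix(tokens):
    out, ops = [], []
    for tok in tokens:
        if tok in PREC:
            # Pop operators of greater or equal precedence first.
            while ops and PREC.get(ops[-1], 0) >= PREC[tok]:
                out.append(ops.pop())
            ops.append(tok)
        else:
            out.append(tok)  # operand
    return out + ops[::-1]

# a + b * c  ->  a b c * +  (evaluable with a simple stack machine)
print(to_postfix(["a", "+", "b", "*", "c"]))
```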
OceanBase is a disk-oriented distributed shared-nothing DBMS.
From the perspective of storage management, OceanBase is divided into multiple Zones. Each Zone is a cluster of physical server nodes. Several Zones store replicas of the same partitions and synchronize logs using the Paxos distributed consensus algorithm. Each Zone has multiple server nodes named ObServers. OceanBase supports physical horizontal partitioning. There are two kinds of blocks for data storage: the Macro Block (2MB), the smallest unit for write operations, and the Micro Block (16KB before compression), the smallest unit for read operations.
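The two block sizes imply a simple containment relationship: each 2MB Macro Block holds many 16KB Micro Blocks. A small Python sketch of the arithmetic, with a hypothetical offset-to-block mapping:

```python
MACRO_BLOCK = 2 * 1024 * 1024   # smallest unit of a write
MICRO_BLOCK = 16 * 1024         # smallest unit of a read (pre-compression)

def micro_blocks_per_macro():
    return MACRO_BLOCK // MICRO_BLOCK

def locate(byte_offset):
    """Map a file offset to (macro block id, micro block id within it)."""
    macro = byte_offset // MACRO_BLOCK
    micro = (byte_offset % MACRO_BLOCK) // MICRO_BLOCK
    return macro, micro

print(micro_blocks_per_macro())   # 128
print(locate(3 * 1024 * 1024))    # (1, 64)
```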
From the perspective of resource management, each database instance is considered a tenant in OceanBase. Every tenant is allocated a unit pool containing units; each unit is a group of computation and storage resources on an ObServer. A tenant has at most one unit on any given ObServer.
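A minimal Python sketch of the placement rule that a tenant has at most one unit per ObServer; the function and names are illustrative, not OceanBase's management API:

```python
def place_unit(placements, tenant, observer):
    """placements maps tenant -> set of ObServers hosting its units."""
    hosted = placements.setdefault(tenant, set())
    if observer in hosted:
        raise ValueError(f"{tenant} already has a unit on {observer}")
    hosted.add(observer)

placements = {}
place_unit(placements, "tenant_a", "observer_1")
place_unit(placements, "tenant_a", "observer_2")    # ok: different server
# place_unit(placements, "tenant_a", "observer_1")  # would raise
```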
OceanBase implements a block cache for Micro Blocks to accelerate large scan queries. It also implements a row cache, for rows in the block cache, to accelerate small get queries.
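A toy Python sketch of this two-level lookup: a point get tries the row cache first, then the Micro Block cache, and only then falls back to (simulated) disk; the key-to-block mapping is invented for the example:

```python
row_cache, block_cache = {}, {}

def read_block_from_disk(block_id):
    # Stand-in for real I/O: a micro block holding four consecutive rows.
    return {f"row{block_id * 4 + i}": f"value{block_id * 4 + i}" for i in range(4)}

def get_row(key):
    if key in row_cache:                            # hot single rows
        return row_cache[key]
    block_id = int(key.removeprefix("row")) // 4    # toy key -> block mapping
    if block_id not in block_cache:
        block_cache[block_id] = read_block_from_disk(block_id)
    row = block_cache[block_id][key]
    row_cache[key] = row                            # promote for future gets
    return row

print(get_row("row5"))  # 'value5', loaded via the block cache
print(get_row("row5"))  # 'value5', now served from the row cache
```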
The storage data structure of OceanBase is based on the LSM-Tree, similar to the approach of LevelDB. Data modifications are first recorded in the MemTable (dynamic data in memory) using a linked list, with the head linked to the corresponding block in the block cache. During the low-peak period at night, or when the size of the MemTable reaches a threshold, OceanBase merges the MemTable into the SSTable (static data on disk) using one of the following merge algorithms (a toy sketch of the merge follows this list):
(1) Major Compaction: Read all the static data from disk, merge it with the dynamic data, and write the result back to disk as new static data. This is the most expensive algorithm and is typically used after a DDL operation.
(2) Minor Compaction: Reuse all the Macro Blocks that are not dirty. This is the default algorithm OceanBase adopts.
(3) Alternate Compaction: When an ObServer is about to compact a partition, queries on that partition are redirected to ObServers in other Zones that store replicas of the same partition; after compaction, the merged Zone warms its cache. OceanBase adopts this algorithm when it has to merge data during a peak period. It is orthogonal to minor and major compaction and is used in combination with one of them.
(4) Dump: Dump the MemTable to disk as a Minor SSTable and merge it with the previously dumped Minor SSTable. When the size of the Minor SSTable exceeds a threshold, OceanBase merges it into the SSTable using one of the aforementioned compaction algorithms. This lightweight approach is used when the dynamic data is significantly smaller than the static data.
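A minimal Python sketch of the merge idea behind major compaction, with toy in-memory stand-ins for the MemTable and SSTable (not OceanBase's actual engine):

```python
sstable = [("a", 1), ("c", 3), ("e", 5)]   # static data, sorted by key
memtable = {"b": 2, "c": 30, "f": 6}       # dynamic data, newest values

def major_compact(sstable, memtable):
    merged = dict(sstable)         # start from the static data
    merged.update(memtable)        # newer dynamic values win on key conflicts
    return sorted(merged.items())  # write back as a new sorted run

print(major_compact(sstable, memtable))
# [('a', 1), ('b', 2), ('c', 30), ('e', 5), ('f', 6)]
```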
N-ary Storage Model (Row/Record), Hybrid
The first released version of OceanBase supported only the N-ary Storage Model, i.e., the DBMS stores all attributes of a single tuple contiguously in a page.
Since OceanBase 2.0, a hybrid storage model is supported: attributes belonging to the same tuple are stored in the same block, but within a block the tuples are compressed and stored in a columnar layout.
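This hybrid layout is in the spirit of PAX-style storage. A small Python sketch with invented column names: each tuple's attributes stay in one block, but inside the block values are regrouped by column:

```python
rows = [(1, "alice", 30), (2, "bob", 25), (3, "carol", 41)]

def to_hybrid_block(rows):
    # All attributes of each tuple stay in this one block...
    # ...but are stored column by column within it, which compresses better.
    return {
        "id":   [r[0] for r in rows],
        "name": [r[1] for r in rows],
        "age":  [r[2] for r in rows],
    }

block = to_hybrid_block(rows)
print(block["age"])                              # columnar access in-block
row2 = tuple(col[1] for col in block.values())   # reassemble a full tuple
print(row2)                                      # (2, 'bob', 25)
```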
OceanBase adopts a shared-nothing system architecture. It stores a replica of each partition on at least three server nodes in different server clusters. Each server node has its own SQL execution engine and storage engine. The storage engine accesses only the local data on its node, while the SQL engine accesses the global schema and generates distributed query plans. Query executors visit the storage engine of each node, distributing and gathering data among the nodes to execute the query. For each database instance, one server node is designated as the active root server, providing root services such as monitoring the health of all participant nodes; the root service is responsible for load balancing, data consistency, error recovery, etc. If the active root server shuts down, OceanBase automatically promotes a standby root server to become the new active root server.
https://github.com/oceanbase/oceanbase
https://open.oceanbase.com/docs