TiDB is an open-source distributed database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL compatible and features horizontal scalability, strong consistency, and high availability. The goal of TiDB is to provide users with a one-stop database solution that covers OLTP (Online Transactional Processing) and real-time analytics. TiDB is suitable for use cases that require high availability, strong consistency, and real-time analytics over large-scale data.


TiDB is inspired by the design of Google F1 and Google Spanner, and it supports features like horizontal scalability, strong consistency, and high availability.


Checkpoints

Non-Blocking Consistent

TiDB provides consistent checkpoints without blocking: users can start a transaction and dump all the data from any table. TiDB also provides a way to read consistent data from historical versions. The tidb_snapshot system variable is introduced to support reading historical data.
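The idea behind a snapshot read can be sketched as follows. This is a conceptual illustration, not TiDB code: each key maps to a list of `(commit_ts, value)` versions, and a read at a snapshot timestamp returns the newest version committed at or before that timestamp.

```python
# Conceptual sketch of snapshot reads over versioned data (not TiDB's
# implementation). The version lists and timestamps here are invented
# for illustration.
versions = {
    "k1": [(10, "a"), (20, "b"), (30, "c")],  # (commit_ts, value), sorted
}

def snapshot_read(key, snapshot_ts):
    # Return the newest version with commit_ts <= snapshot_ts.
    candidates = [(ts, v) for ts, v in versions[key] if ts <= snapshot_ts]
    return max(candidates)[1] if candidates else None

print(snapshot_read("k1", 25))  # "b" — the value as of ts=25
print(snapshot_read("k1", 5))   # None — the key did not yet exist
```

Setting tidb_snapshot to a timestamp plays the role of `snapshot_ts` here: every subsequent read in the session sees the data as of that moment.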

Concurrency Control

Multi-version Concurrency Control (MVCC)

Historical versions of data are kept because each update/removal creates a new version of the data object instead of updating/removing it in place. Not all versions are kept, however: versions older than a specific time are removed completely to reduce storage usage and the performance overhead caused by accumulating too many historical versions. In TiDB, Garbage Collection (GC) runs periodically to remove obsolete data versions. GC is triggered as follows: a gc_worker goroutine runs in the background of each TiDB server, and in a cluster with multiple TiDB servers, one of the gc_worker goroutines is automatically elected leader. The leader is responsible for maintaining the GC state and sending GC commands to each TiKV region leader.
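The pruning rule can be sketched in a few lines. This is a simplified model, not TiDB's actual GC: versions older than a GC safe point are dropped, except that the newest version at or below the safe point is retained so that snapshot reads at the safe point still succeed.

```python
# Simplified MVCC garbage collection (illustration only, not TiDB code).
def gc(version_list, safe_point):
    """version_list: sorted [(commit_ts, value), ...]; return kept versions."""
    older = [v for v in version_list if v[0] <= safe_point]
    newer = [v for v in version_list if v[0] > safe_point]
    keep_old = older[-1:]  # keep the newest version at or below the safe point
    return keep_old + newer

print(gc([(10, "a"), (20, "b"), (30, "c")], 25))  # [(20, 'b'), (30, 'c')]
```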

Data Model


TiDB uses TiKV as the underlying data storage engine. TiKV uses a key-value model and can be seen as a huge, distributed, ordered map with high performance and reliability.
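Because the map is ordered, relational rows can be laid out so that each table occupies a contiguous key range. The sketch below uses a human-readable key layout, `t<tableID>_r<rowID>`, loosely modeled on TiDB's encoding; the real encoding is a binary, memcomparable format.

```python
# Simplified sketch of mapping rows onto an ordered key-value store.
# The key layout and data are illustrative, not TiDB's real encoding.
store = {}  # stand-in for the distributed ordered map

def row_key(table_id, row_id):
    # Zero-pad so lexicographic key order matches numeric order.
    return f"t{table_id:08d}_r{row_id:016d}"

def insert_row(table_id, row_id, row):
    store[row_key(table_id, row_id)] = row

insert_row(1, 3, ("alice", 30))
insert_row(1, 1, ("bob", 25))
insert_row(2, 1, ("x",))

# A full scan of table 1 is a range scan over its key prefix.
table1 = [store[k] for k in sorted(store) if k.startswith("t00000001_r")]
print(table1)  # [('bob', 25), ('alice', 30)]
```

Keeping one table's rows adjacent in key space is what makes table scans and range queries efficient on top of a plain ordered map.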

Isolation Levels

Read Committed, Repeatable Read

TiDB uses the Percolator transaction model: a global read timestamp is obtained when a transaction starts, and a global commit timestamp is obtained when it commits. The execution order of transactions is determined by these timestamps. Repeatable Read is the default transaction isolation level in TiDB.
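The role of the two timestamps can be sketched with a minimal write-conflict check in the spirit of Percolator (this is an illustration, not TiDB's implementation): a transaction reads at its start timestamp, and at commit it aborts if any key it wrote was committed by another transaction after that start timestamp.

```python
# Minimal timestamp-based write-conflict detection (illustrative only).
last_commit_ts = {}  # key -> commit_ts of the latest committed write

def try_commit(start_ts, commit_ts, written_keys):
    for k in written_keys:
        if last_commit_ts.get(k, 0) > start_ts:
            return False  # another txn committed this key after we started
    for k in written_keys:
        last_commit_ts[k] = commit_ts
    return True

assert try_commit(start_ts=1, commit_ts=2, written_keys=["a"])
# A transaction that also started at ts=1 now conflicts on "a":
print(try_commit(start_ts=1, commit_ts=3, written_keys=["a"]))  # False
```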


Joins

Hash Join

TiDB’s SQL layer currently supports three types of distributed joins: hash join, sort-merge join (chosen when the optimizer estimates that even the smallest table is too large to fit in memory and the predicates contain indexed columns), and index lookup join. With the columnar storage engine TiFlash, TiDB supports two more join algorithms: Broadcast Hash Join and Shuffled Hash Join.
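The basic hash join underlying these variants can be sketched briefly: build a hash table on the smaller input, then probe it with the other input. Table names and rows below are invented for illustration.

```python
# Textbook hash join sketch (not TiDB source code).
def hash_join(build_rows, probe_rows, build_key, probe_key):
    table = {}
    for row in build_rows:  # build phase: hash the smaller input
        table.setdefault(row[build_key], []).append(row)
    out = []
    for row in probe_rows:  # probe phase: look up each row of the other input
        for match in table.get(row[probe_key], []):
            out.append({**match, **row})
    return out

users = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
orders = [{"uid": 1, "item": "pen"}, {"uid": 1, "item": "ink"}, {"uid": 2, "item": "pad"}]
result = hash_join(users, orders, "id", "uid")
print(len(result))  # 3 joined rows
```

The distributed variants differ mainly in where the build side lives: Broadcast Hash Join ships the small table to every node, while Shuffled Hash Join repartitions both inputs by join key.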


Logging

Physical Logging

TiDB uses the Raft consensus algorithm for replication, so it maintains a Raft log. TiDB also provides a binlog to export data from the TiDB cluster.

Query Compilation

Not Supported

Query Execution

Tuple-at-a-Time Model, Vectorized Model

In most cases, TiDB processes data tuple by tuple, but in some cases it uses vectorized execution.
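The difference between the two models can be sketched as follows; the operator interface and chunk size here are invented for illustration.

```python
# Contrast of the two execution models (illustrative, not TiDB code).
rows = list(range(10))

# Tuple-at-a-time: each Next() call produces a single row.
def next_tuple(it):
    return next(it, None)

# Vectorized: each Next() call produces a chunk of rows, amortizing
# the per-call overhead across the whole batch.
def next_chunk(it, chunk_size=4):
    chunk = []
    for _ in range(chunk_size):
        row = next(it, None)
        if row is None:
            break
        chunk.append(row)
    return chunk

print(next_tuple(iter(rows)))  # 0 — one row per call

it = iter(rows)
chunks = []
while (c := next_chunk(it)):
    chunks.append(c)
print(chunks)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```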

Query Interface


TiDB supports SQL and the MySQL dialect.

Storage Architecture


Any durable storage engine stores data on disk, and TiKV is no exception. TiKV, however, does not write data to disk directly; instead, it stores data in RocksDB, which is responsible for the actual data storage. The reason is that developing a standalone storage engine, especially a high-performance one, is very costly.

Storage Model


TiDB stores its data in the distributed key-value storage engine TiKV. TiFlash is an extension of TiKV that stores data in columnar format to accelerate analytical workloads.

Storage Organization


System Architecture


The TiDB cluster has four components: the TiDB server, the PD server, the TiKV server, and the TiFlash server.
- The TiDB server is stateless. It does not store data and it is for computing only. TiDB is horizontally scalable and provides the unified interface to the outside through the load balancing components such as Linux Virtual Server (LVS), HAProxy, or F5.
- The Placement Driver (PD) server is the managing component of the entire cluster.
- The TiKV server is responsible for storing data. From an external view, TiKV is a distributed transactional key-value storage engine. The Region is the basic unit of data storage: each Region stores the data for a particular key range, a left-closed, right-open interval from StartKey to EndKey, and there are multiple Regions on each TiKV node. TiKV uses the Raft protocol for replication to ensure data consistency and disaster recovery. The replicas of the same Region on different nodes form a Raft Group. The load balancing of data among TiKV nodes is scheduled by PD, and the Region is also the basic unit of such scheduling.
- The TiFlash server is a special type of storage server. Unlike ordinary TiKV nodes, TiFlash stores data by column and is mainly designed to accelerate analytical processing.
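Routing a key to its Region follows directly from the [StartKey, EndKey) layout: locate the Region whose range contains the key by searching the sorted StartKeys. The Region boundaries below are made up for illustration.

```python
# Sketch of Region routing over left-closed, right-open key ranges
# (illustrative boundaries, not a real cluster's Region table).
import bisect

regions = [  # (start_key, end_key, region_id), sorted by start_key
    ("", "b", 1),
    ("b", "m", 2),
    ("m", "\xff", 3),
]
starts = [r[0] for r in regions]

def locate(key):
    i = bisect.bisect_right(starts, key) - 1
    start, end, rid = regions[i]
    assert start <= key < end  # key falls in [StartKey, EndKey)
    return rid

print(locate("apple"), locate("kiwi"), locate("zebra"))  # 1 2 3
```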


Virtual Views

TiDB supports views. Views in TiDB are non-materialized: when a view is queried, TiDB internally rewrites the query to combine the view definition with the SQL query.
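The consequence of non-materialization can be sketched in miniature: the view stores only its defining query, which is re-evaluated against the base data on every access. The names and data here are invented for illustration.

```python
# Toy model of non-materialized views (not TiDB's rewrite machinery).
base_tables = {"t": [1, 5, 2, 9]}
views = {"big_t": lambda: [x for x in base_tables["t"] if x > 2]}

def query(name):
    if name in views:
        return views[name]()  # expand the view definition at query time
    return base_tables[name]

print(query("big_t"))  # [5, 9]
base_tables["t"].append(7)
print(query("big_t"))  # [5, 9, 7] — the view reflects current base data
```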


Source Code


Tech Docs






Country of Origin


Start Year


Project Type

Commercial, Open Source

Written in


Supported languages

C, C++, Cocoa, D, Eiffel, Erlang, Go, Haskell, Java, Lua, OCaml, Perl, PHP, Python, Ruby, Scheme, SQL, Tcl

Embeds / Uses


Inspired By

Cloud Spanner

Compatible With


Operating Systems

Hosted, Linux


Licenses

Apache v2