In July 2015, Manish Rai Jain created Dgraph, drawing on his previous experience at Google, where he led a project to unify all the data structures used for serving web search with a backend graph system. The first version, v0.1, was released in December 2015. In 2020, Dgraph Labs launched a hosted offering, Dgraph Cloud. The only native GraphQL graph database available on AWS, GCP, and Azure, Dgraph Cloud has three tiers.
A mutation that hits the database is not immediately written to disk via BadgerDB. Regenerating the posting list on every mutation would be expensive, because all of its postings must be kept sorted. Instead, every mutation is logged and synced to disk via append-only log files called write-ahead logs, so any acknowledged write is always on disk. This allows Dgraph to recover from a system crash by replaying all mutations since the last write to the PostingList.
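The write path can be pictured with a minimal append-only log. The Go sketch below is illustrative only, not Dgraph's actual WAL format; the length-prefixed record layout and the WAL type are assumptions. The point it shows is that a mutation is acknowledged only after the log entry has been synced to disk.

```go
// Package wal: a minimal append-only write-ahead log sketch (illustrative;
// Dgraph's real WAL lives inside its Raft and Badger layers).
package wal

import (
	"encoding/binary"
	"os"
)

// WAL appends length-prefixed mutation records and fsyncs before acknowledging.
type WAL struct {
	f *os.File
}

func Open(path string) (*WAL, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		return nil, err
	}
	return &WAL{f: f}, nil
}

// Append writes one serialized mutation and syncs it to disk, so the write
// survives a crash even though the posting list has not been regenerated yet.
func (w *WAL) Append(mutation []byte) error {
	var hdr [4]byte
	binary.BigEndian.PutUint32(hdr[:], uint32(len(mutation)))
	if _, err := w.f.Write(hdr[:]); err != nil {
		return err
	}
	if _, err := w.f.Write(mutation); err != nil {
		return err
	}
	return w.f.Sync() // acknowledge only after the entry is durable
}
```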
Delta Encoding, Bitmap Encoding
Dgraph Alpha lets you configure the compression of data on disk using the --badger superflag’s compression option. You can choose between the Snappy and Zstandard compression algorithms, or choose not to compress data on disk.
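Because the on-disk format is Badger's, the same choice is visible in Badger's own Go options API. The snippet below is a minimal sketch, assuming a local data directory and an arbitrary compression level, of opening a Badger instance with Zstandard compression; options.Snappy and options.None are the other choices.

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v4"
	"github.com/dgraph-io/badger/v4/options"
)

func main() {
	// Open Badger with Zstandard compression, mirroring the choice exposed
	// through Dgraph Alpha's --badger superflag.
	opts := badger.DefaultOptions("/tmp/badger-demo").
		WithCompression(options.ZSTD).
		WithZSTDCompressionLevel(1)

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```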
Multi-version Concurrency Control (MVCC)
Dgraph supports MVCC, Read Snapshots, and Distributed ACID transactions.
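As a rough illustration of the transactional side, the sketch below uses the dgo Go client to run a mutation inside a transaction and commit it. The endpoint, the N-Quad data, and the insecure local connection are assumptions; this is not a complete application.

```go
package main

import (
	"context"
	"log"

	"github.com/dgraph-io/dgo/v210"
	"github.com/dgraph-io/dgo/v210/protos/api"
	"google.golang.org/grpc"
)

func main() {
	// Connect to a local Dgraph Alpha (9080 is the default gRPC port).
	conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	ctx := context.Background()
	txn := dg.NewTxn()     // start a distributed ACID transaction
	defer txn.Discard(ctx) // no-op after a successful commit

	// Add one triple; the data here is purely illustrative.
	mu := &api.Mutation{SetNquads: []byte(`_:alice <name> "Alice" .`)}
	if _, err := txn.Mutate(ctx, mu); err != nil {
		log.Fatal(err)
	}
	if err := txn.Commit(ctx); err != nil {
		log.Fatal(err) // e.g. aborted because of a conflicting concurrent write
	}
}
```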
Inverted Index (Full Text), Log-Structured Merge Tree
Today we use Badger, an efficient, persistent key-value database written in Go that we built ourselves. Our blog post "Why we choose Badger over RocksDB in Dgraph" covers the rationale for that choice. Badger is now used in Dgraph as well as in many other projects.
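A minimal sketch of Badger used directly as a key-value store follows; the data directory and keys are illustrative. Both the write and the read run inside Badger transactions.

```go
package main

import (
	"fmt"
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	db, err := badger.Open(badger.DefaultOptions("/tmp/badger-kv"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Write a key inside a read-write transaction.
	if err := db.Update(func(txn *badger.Txn) error {
		return txn.Set([]byte("answer"), []byte("42"))
	}); err != nil {
		log.Fatal(err)
	}

	// Read it back inside a read-only transaction.
	if err := db.View(func(txn *badger.Txn) error {
		item, err := txn.Get([]byte("answer"))
		if err != nil {
			return err
		}
		return item.Value(func(val []byte) error {
			fmt.Printf("answer = %s\n", val)
			return nil
		})
	}); err != nil {
		log.Fatal(err)
	}
}
```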
Dgraph's PostingList structure stores all DirectedEdges corresponding to an Attribute in the format Attribute: Entity -> sorted list of ValueIds, which already contains all the data needed for a join. Each RPC call to the cluster therefore results in only one join rather than multiple joins, and the join operation is reduced to a lookup instead of work in the application layer.
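The lookup-instead-of-join idea can be sketched with plain Go maps. The types and the friend/lives_in attributes below are hypothetical stand-ins, not Dgraph's real PostingList internals.

```go
package main

import (
	"fmt"
	"sort"
)

// PostingList maps an entity UID to a sorted list of value UIDs for one
// attribute, i.e. Attribute: Entity -> sorted list of ValueIds.
// Illustrative sketch only.
type PostingList map[uint64][]uint64

// store maps attribute name -> posting list.
var store = map[string]PostingList{
	"friend": {
		1: {2, 3, 5}, // entity 1 is friends with entities 2, 3, 5
	},
	"lives_in": {
		2: {100},
		3: {100},
		5: {200},
	},
}

// friendsIn answers "which friends of person live in place?" with two posting
// list lookups and a sorted-list membership test, not an application-level join.
func friendsIn(person, place uint64) []uint64 {
	var out []uint64
	for _, friend := range store["friend"][person] {
		cities := store["lives_in"][friend]
		i := sort.Search(len(cities), func(i int) bool { return cities[i] >= place })
		if i < len(cities) && cities[i] == place {
			out = append(out, friend)
		}
	}
	return out
}

func main() {
	fmt.Println(friendsIn(1, 100)) // [2 3]
}
```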
Dgraph's logging scheme is close to logical logging. Every mutation is logged and then synced to disk via an append-only log. In addition, two in-memory mutation layers, one responsible for replacement and one for addition/deletion, buffer mutations in memory, allowing dirty posting lists to be garbage-collected periodically via BadgerDB. This reduces how often posting lists need to be recreated.
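Recovery can then be pictured as replaying the log against the last durable posting-list state. The sketch below uses hypothetical Mutation and PostingList types, not Dgraph's actual code.

```go
// Package recovery: an illustrative replay-after-crash sketch.
package recovery

// Mutation and PostingList are hypothetical stand-ins for Dgraph's internals.
type Mutation struct {
	Timestamp uint64
	Apply     func(pl *PostingList)
}

type PostingList struct {
	LastWriteTs uint64
	ValueIds    []uint64
}

// Replay re-applies every logged mutation newer than the posting list's last
// durable write, restoring the in-memory state after a restart.
func Replay(pl *PostingList, wal []Mutation) {
	for _, m := range wal {
		if m.Timestamp > pl.LastWriteTs {
			m.Apply(pl)
		}
	}
}
```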
Dgraph uses DQL, a variation of GraphQL (created by Facebook), as its query language, chosen for GraphQL's graph-like query syntax, schema validation, and subgraph-shaped responses. DQL differs in that it adds graph operations and removes features that do not fit a graph database's structure.
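For a feel of DQL's syntax, the sketch below sends a small DQL query through the dgo Go client. It assumes a local Alpha on the default gRPC port and a name predicate indexed so that eq works; the data is illustrative.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/dgraph-io/dgo/v210"
	"github.com/dgraph-io/dgo/v210/protos/api"
	"google.golang.org/grpc"
)

func main() {
	conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	// A DQL query: find people named "Alice" and expand their friend edges,
	// returning a subgraph-shaped JSON response.
	const q = `{
	  people(func: eq(name, "Alice")) {
	    name
	    friend {
	      name
	    }
	  }
	}`

	resp, err := dg.NewReadOnlyTxn().Query(context.Background(), q)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(resp.Json))
}
```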
The BadgerDB library decides how data is served: out of memory, SSD, or disk. So that processing can continue, updates to posting lists can be stored in memory as an overlay over the immutable PostingList. Two separate update layers are provided, one for replacement and one for addition/deletion, which allows iteration over Postings in memory without fetching anything from disk.
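The overlay idea can be sketched as an immutable base slice plus two in-memory layers. The types below are hypothetical, and the sketch ignores sorted-order maintenance for overlay additions to stay short.

```go
// Package overlay: an illustrative two-layer overlay over an immutable posting list.
package overlay

type Posting = uint64

type PostingList struct {
	base    []Posting        // immutable postings, as stored in BadgerDB
	replace []Posting        // layer 1: full replacement (nil if unused)
	delta   map[Posting]bool // layer 2: true = added, false = deleted
}

// Iterate walks the effective postings entirely in memory, applying the
// overlay layers without rewriting or re-reading the base list from disk.
func (pl *PostingList) Iterate(fn func(Posting)) {
	src := pl.base
	if pl.replace != nil {
		src = pl.replace
	}
	for _, p := range src {
		if added, ok := pl.delta[p]; ok && !added {
			continue // deleted in the overlay
		}
		fn(p)
	}
	for p, added := range pl.delta {
		if added && !contains(src, p) {
			fn(p) // added in the overlay, not present in the base
		}
	}
}

func contains(s []Posting, p Posting) bool {
	for _, x := range s {
		if x == p {
			return true
		}
	}
	return false
}
```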
Dgraph uses the RAFT consensus algorithm for communication between servers. During each term (election cycle), voting decides a single leader; communication then flows as unidirectional RPCs from leader to followers, and the servers do not share a disk. Each server exposes a gRPC interface, which the query processor calls to retrieve data. Clients must locate the cluster to interact with it; a client can pick any server in the cluster at random. If the chosen server is not the leader, the request is rejected and the leader's information is passed along, so the client can re-route its query to the leader.
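Client-side leader re-routing can be sketched as: pick a random server, and if it answers "not the leader", retry against the address it reports. The error type and Send function below are hypothetical, not Dgraph's client API.

```go
// Package raftclient: an illustrative leader-discovery retry loop.
package raftclient

import (
	"errors"
	"math/rand"
)

// NotLeaderError is the hypothetical rejection a follower returns, carrying
// the leader's address so the client can re-route.
type NotLeaderError struct{ Leader string }

func (e *NotLeaderError) Error() string { return "not the leader; try " + e.Leader }

// Send stands in for an RPC to one server.
type Send func(addr, query string) (string, error)

// Query picks a random cluster member and retries against the leader if the
// first server rejects the request.
func Query(servers []string, query string, send Send) (string, error) {
	addr := servers[rand.Intn(len(servers))]
	resp, err := send(addr, query)
	var nle *NotLeaderError
	if errors.As(err, &nle) {
		return send(nle.Leader, query) // re-route to the reported leader
	}
	return resp, err
}
```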
https://github.com/dgraph-io/dgraph
Dgraph Labs, Inc.
2015