DGraph

Dgraph is a highly scalable, low-latency, high-throughput distributed graph database. It emphasizes concurrency in a distributed environment by minimizing the number of network calls.

History

In July 2015, Manish Rai Jain created Dgraph based on his previous experience at Google, where he led a project to unify all data structures used for serving web search behind a backend graph system. The first version, v0.1, was released in December 2015, and the goal of offering an open-source, native, distributed graph database has remained unchanged since then.

Indexes

Hash Table

Dgraph relies on RocksDB to serve posting lists and provide indexing. RocksDB's key-value format makes random lookups efficient, and its plain-table format supports a hash-based index that is faster for point lookups than the default block-based table.
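
As a rough sketch of why exact-match (hash-based) lookups suffice, the example below models the store as a plain key-value interface: a posting list is always fetched by its complete (predicate, subject) key with a single point Get, so no ordered scans are needed. The interface, key encoding, and data are illustrative assumptions, not Dgraph's actual code.

    // Illustrative only: a minimal key-value view of how a posting list is
    // fetched by an exact key. The real code path goes through Dgraph's
    // RocksDB bindings.
    package main

    import "fmt"

    // Store is a stand-in for the key-value API: every posting-list read is
    // an exact-match Get, which is what a hash-based plain-table index
    // accelerates (no ordered scans are required).
    type Store interface {
    	Get(key []byte) ([]byte, bool)
    }

    type memStore map[string][]byte

    func (m memStore) Get(key []byte) ([]byte, bool) {
    	v, ok := m[string(key)]
    	return v, ok
    }

    // postingListKey builds an exact lookup key from predicate and subject.
    // The concrete encoding is an assumption for illustration.
    func postingListKey(predicate, subject string) []byte {
    	return []byte(predicate + "|" + subject)
    }

    func main() {
    	db := memStore{"follows|0x01": []byte("serialized posting list")}
    	if v, ok := db.Get(postingListKey("follows", "0x01")); ok {
    		fmt.Printf("fetched %d bytes with a single point lookup\n", len(v))
    	}
    }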

Query Compilation

Not Supported

No information about query compilation is available in the Dgraph wiki or discussion forums.

Data Model

Graph

In Dgraph, a PostingList contains all DirectedEdges corresponding to an Attribute, where each DirectedEdge consists of an entity, an attribute, and a value (or value id). Posting lists are all served via RocksDB in a key-value format: (Predicate, Subject) --> PostingList.
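
A minimal Go sketch of this model is shown below; the field names, key encoding, and the in-memory map standing in for RocksDB are assumptions made for illustration.

    // Illustrative data-model sketch; field names and encoding are assumptions.
    package main

    import "fmt"

    // DirectedEdge is one fact: entity --attribute--> value (or value id).
    type DirectedEdge struct {
    	Entity    uint64 // subject id
    	Attribute string // predicate name
    	Value     []byte // literal value, or
    	ValueID   uint64 // id of the object node
    }

    // PostingList groups every DirectedEdge that shares the same
    // (attribute, entity) pair; it is the unit stored in RocksDB.
    type PostingList struct {
    	Edges []DirectedEdge
    }

    // key mimics the (Predicate, Subject) --> PostingList mapping.
    func key(predicate string, subject uint64) string {
    	return fmt.Sprintf("%s|%#x", predicate, subject)
    }

    func main() {
    	store := map[string]*PostingList{} // stand-in for RocksDB
    	e := DirectedEdge{Entity: 0x01, Attribute: "follows", ValueID: 0x02}
    	k := key(e.Attribute, e.Entity)
    	if store[k] == nil {
    		store[k] = &PostingList{}
    	}
    	store[k].Edges = append(store[k].Edges, e)
    	fmt.Println("postings under", k, "=", len(store[k].Edges))
    }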

Stored Procedures

Supported

Functions can only be applied to indexed attributes. Pre-defined functions such as term matching, inequality comparison, and geolocation are provided; users supply the parameters to customize their behavior.
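
A hedged example of invoking such a function is sketched below: it POSTs a GraphQL+- query that uses a term-matching function to a locally running Dgraph instance. The endpoint address, content type, function name (anyofterms), and the assumption that the name predicate carries a term index all depend on the Dgraph version and schema in use.

    // Sketch of calling a built-in function on an indexed attribute over
    // Dgraph's HTTP interface; endpoint and function name are assumptions.
    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    )

    // A term-matching function applied to the (assumed) indexed predicate "name".
    const query = `{
      movies(func: anyofterms(name, "graph database")) {
        name
        release_date
      }
    }`

    func main() {
    	resp, err := http.Post("http://localhost:8080/query",
    		"application/graphql+-", // content type is an assumption
    		strings.NewReader(query))
    	if err != nil {
    		fmt.Println("query failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(string(body)) // subgraph-shaped JSON response
    }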

Views

Not Supported

No mention of views is found in the Dgraph wiki or discussion forums.

Storage Architecture

Hybrid

The RocksDB library decides whether data is served out of memory, SSD, or disk. To allow processing to proceed, updates to posting lists are stored in memory as an overlay on top of the immutable posting list. Two separate update layers are maintained, one for replacements and one for additions/deletions, which allows iteration over postings in memory without fetching anything from disk.
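
The sketch below illustrates the layering idea: an immutable base list plus in-memory layers for replacements and additions/deletions, merged at iteration time without another disk fetch. The types and merge rules are illustrative assumptions, not Dgraph's implementation.

    // Illustrative sketch of an immutable posting list with in-memory
    // mutation layers: one layer for replacements, one for adds/deletes.
    package main

    import "fmt"

    type layeredList struct {
    	base    []uint64        // immutable postings as read from RocksDB
    	replace map[int]uint64  // index -> replacement value
    	added   []uint64        // postings added since the last rewrite
    	deleted map[uint64]bool // postings deleted since the last rewrite
    }

    // iterate walks the merged view without touching disk again.
    func (l *layeredList) iterate(f func(uint64)) {
    	for i, v := range l.base {
    		if r, ok := l.replace[i]; ok {
    			v = r
    		}
    		if !l.deleted[v] {
    			f(v)
    		}
    	}
    	for _, v := range l.added {
    		if !l.deleted[v] {
    			f(v)
    		}
    	}
    }

    func main() {
    	l := &layeredList{
    		base:    []uint64{1, 2, 3},
    		replace: map[int]uint64{1: 20},
    		added:   []uint64{4},
    		deleted: map[uint64]bool{3: true},
    	}
    	l.iterate(func(v uint64) { fmt.Println(v) }) // prints 1, 20, 4
    }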

Concurrency Control

Not Supported

Dgraph's main focus is low latency and high throughput. It references the designs of Google's Bigtable and Facebook's Tao, achieving high scalability at the cost of full ACID-compliant transactional support. Versioning of values is under consideration but not yet implemented.

Storage Model

Custom

Dgraph utilizes RocksDB (an application library rather than a database) for key-value storage of posting lists on disk. However, all data handling still happens at the Dgraph level rather than in RocksDB; RocksDB functions as Dgraph's interface to the disk.

Joins

Hash Join

Dgraph's PostingList structure stores all DirectedEdges corresponding to an Attribute in the format Attribute: Entity -> sorted list of ValueIds, which already contains all the data needed for a join. Therefore, each RPC call to the cluster results in only one join rather than many, and the join operation is reduced to a posting-list lookup instead of being performed in the application layer.
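
As an illustration of this reduction, the sketch below treats a join as the intersection of two sorted ValueId lists that were fetched by key; the function and the sample data are assumptions made for the example.

    // Illustrative: joining two attributes reduces to fetching their sorted
    // ValueId lists and intersecting them, rather than joining row sets in
    // the application layer. Names and data are made up for the example.
    package main

    import "fmt"

    // intersectSorted merges two ascending uid lists in O(len(a)+len(b)).
    func intersectSorted(a, b []uint64) []uint64 {
    	var out []uint64
    	i, j := 0, 0
    	for i < len(a) && j < len(b) {
    		switch {
    		case a[i] < b[j]:
    			i++
    		case a[i] > b[j]:
    			j++
    		default:
    			out = append(out, a[i])
    			i++
    			j++
    		}
    	}
    	return out
    }

    func main() {
    	// Posting lists keyed by attribute: entity -> sorted ValueIds.
    	livesIn := []uint64{0x2, 0x5, 0x9, 0xc}  // e.g. people living in SF
    	follows := []uint64{0x1, 0x5, 0x9, 0xff} // e.g. people Alice follows
    	fmt.Printf("join result: %#x\n", intersectSorted(livesIn, follows))
    }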

Logging

Logical Logging

Dgraph's logging scheme is close to logical logging. Every mutation is logged and then synced to disk via an append-only log. Additionally, the two in-memory mutation layers, responsible for replacements and additions/deletions respectively, record mutations in memory, allowing periodic garbage collection of dirty posting lists via RocksDB. This reduces the need to recreate posting lists.
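
A minimal sketch of such an append-only mutation log is shown below, with each record flushed to disk before being acknowledged; the record format and file handling are illustrative assumptions, not Dgraph's actual log format.

    // Illustrative append-only log: each mutation is serialized, appended,
    // and synced before being acknowledged. The record format is made up.
    package main

    import (
    	"fmt"
    	"os"
    )

    type mutationLog struct {
    	f *os.File
    }

    func openLog(path string) (*mutationLog, error) {
    	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
    	if err != nil {
    		return nil, err
    	}
    	return &mutationLog{f: f}, nil
    }

    // append writes one logical mutation record and forces it to disk.
    func (l *mutationLog) append(record string) error {
    	if _, err := fmt.Fprintln(l.f, record); err != nil {
    		return err
    	}
    	return l.f.Sync()
    }

    func main() {
    	log, err := openLog("mutations.log")
    	if err != nil {
    		panic(err)
    	}
    	defer log.f.Close()
    	// SET <subject> <predicate> <value>, mirroring a logical mutation.
    	if err := log.append("SET 0x01 follows 0x02"); err != nil {
    		panic(err)
    	}
    }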

Checkpoints

Non-Blocking Consistent

The checkpoint scheme is not described in the Dgraph documentation, so the question was raised in the Dgraph Slack group. The answer above was provided directly by the developers, but no further details were revealed.

Isolation Levels

Read Uncommitted

Dgraph does not support transactions at this point. A mutation can be composed of multiple edges, where each edge might belong to a different PostingList. Dgraph acquires RWMutex locks at the posting-list level; it does not acquire locks across multiple posting lists. For writes, some edges get written before others, so any read that happens while a mutation is in progress may see partially committed data. However, durability is guaranteed: once a mutation succeeds, any subsequent read sees the updated data in its entirety.
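
The sketch below illustrates posting-list-level locking with one sync.RWMutex per list: because no lock spans multiple lists, a mutation touching two lists becomes visible one list at a time, which is what allows readers to observe partially applied writes. The types and keys are assumptions for illustration.

    // Illustrative per-posting-list locking: each list has its own RWMutex,
    // and a mutation touching several lists locks them one at a time, so a
    // concurrent reader may observe only part of the mutation.
    package main

    import (
    	"fmt"
    	"sync"
    )

    type postingList struct {
    	mu    sync.RWMutex
    	edges []string
    }

    func (p *postingList) add(edge string) {
    	p.mu.Lock()
    	defer p.mu.Unlock()
    	p.edges = append(p.edges, edge)
    }

    func (p *postingList) read() []string {
    	p.mu.RLock()
    	defer p.mu.RUnlock()
    	return append([]string(nil), p.edges...)
    }

    func main() {
    	lists := map[string]*postingList{
    		"follows|0x01": {},
    		"name|0x02":    {},
    	}
    	// One mutation spanning two posting lists: no lock covers both writes.
    	lists["follows|0x01"].add("0x01 follows 0x02")
    	// A reader scheduled at this point would see the first edge only.
    	lists["name|0x02"].add(`0x02 name "Bob"`)
    	fmt.Println(lists["follows|0x01"].read(), lists["name|0x02"].read())
    }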

System Architecture

Shared-Nothing

Dgraph uses the Raft consensus algorithm for communication between servers. During each term (election cycle), a vote decides a single leader; communication then flows as unidirectional RPCs from the leader to the followers, and the servers naturally do not share disks. Each server exposes a gRPC interface, which the query processor calls to retrieve data. Clients must locate the cluster to interact with it, and a client can pick any server in the cluster at random. If the chosen server is not the leader, the request is rejected and the leader's information is passed along, so the client can re-route its query to the leader.
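
A hypothetical sketch of this client-side re-routing is shown below; the types, rejection message, and leader-discovery mechanism are assumptions for illustration, not Dgraph's client API or Raft implementation.

    // Hypothetical sketch of leader discovery: a rejected request carries
    // the leader's address, and the client re-routes its query there.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    )

    type server struct {
    	addr     string
    	isLeader bool
    	leader   string // address of the known leader
    }

    // handle rejects requests on followers, passing the leader address along.
    func (s *server) handle(query string) (string, error) {
    	if !s.isLeader {
    		return "", errors.New("not leader; try " + s.leader)
    	}
    	return "result of " + query, nil
    }

    func main() {
    	cluster := []*server{
    		{addr: "s1", leader: "s3"},
    		{addr: "s2", leader: "s3"},
    		{addr: "s3", isLeader: true},
    	}
    	byAddr := map[string]*server{}
    	for _, s := range cluster {
    		byAddr[s.addr] = s
    	}

    	pick := cluster[rand.Intn(len(cluster))] // client picks any server
    	res, err := pick.handle("q")
    	if err != nil {
    		// Re-route to the leader named in the rejection.
    		res, _ = byAddr[pick.leader].handle("q")
    	}
    	fmt.Println(res)
    }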

Query Interface

GraphQL

Dgraph uses a variation of GraphQL (created by Facebook) called GraphQL+- as its query language because of GraphQL's graph-like query syntax, schema validation, and subgraph-shaped responses. The difference is that GraphQL+- adds support for graph operations and removes features that do not fit a graph database's structure.
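
For illustration, a GraphQL+- query of roughly the following shape (the predicate names are assumptions, and function names vary across Dgraph versions) selects a node and traverses its outgoing edges; the JSON response is nested in the same subgraph shape as the query.

    {
      director(func: anyofterms(name, "Christopher Nolan")) {
        name
        director.film {
          name
          release_date
        }
      }
    }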

Website

https://dgraph.io/

Source Code

https://github.com/dgraph-io/dgraph

Developer

DGraph Labs, Inc

Country of Origin

US

Start Year

2015

Project Type

Commercial, Open Source

Written in

Go

Supported languages

Go

Derived From

RocksDB

Operating Systems

Linux, OS X

Licenses

Apache v2