In July 2015, Manish Rai Jain (founder of Dgraph Labs) came up with the idea for Dgraph based on his previous experience at Google, where he led a project to unite all data structures for serving web search behind a backend graph system. The first version, v0.1, was released in December 2015, and its goal of offering an open-source, native, and distributed graph database has remained unchanged since then. In May 2016, VC investors including Blackbird Ventures and Bain Capital Ventures led a $1.45 million funding round for Dgraph.
Dgraph uses a variation of GraphQL (created by Facebook) called GraphQL+- as its query language, chosen for GraphQL's graph-like query syntax, schema validation, and subgraph-shaped responses. The difference is that GraphQL+- adds support for graph operations and removes some features that do not fit a graph database's structure.
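As a rough illustration of the query shape, a GraphQL+- query selects root nodes with a function and then follows edges through nested blocks, and the response is JSON shaped like the requested subgraph. The sketch below embeds such a query as a Go string constant; the predicate names (name, age, friend) are assumptions for illustration, not part of any particular schema.

```go
package main

import "fmt"

// A sketch of a GraphQL+- query: the root function picks starting nodes,
// nested blocks follow edges, and the response mirrors this subgraph shape.
// The predicates (name, age, friend) are hypothetical.
const exampleQuery = `{
  me(func: eq(name, "Alice")) {
    name
    age
    friend {
      name
    }
  }
}`

func main() {
	fmt.Println(exampleQuery)
}
```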
Dgraph's main focus is low latency and high throughput. Its design references Google's Bigtable and Facebook's Tao, and it achieves high scalability at the cost of forgoing full ACID-compliant transactional support. Value data versioning is under consideration but not yet implemented.
Dgraph's logging scheme is close to logical logging. Every mutation is logged and then synced to disk via an append-only log. Additionally, two mutation layers, responsible for replacement and for addition/deletion respectively, record mutations in memory, allowing dirty posting lists to be periodically garbage-collected via RocksDB. This reduces the need to recreate posting lists.
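The gist of an append-only mutation log can be sketched as follows. This is a simplified illustration under assumed types and record layout, not Dgraph's actual implementation: each logical mutation (one edge set or deleted) is appended and synced before the write is acknowledged.

```go
package mutationlog

import (
	"encoding/json"
	"os"
)

// Mutation is a simplified logical log record: one edge being set or deleted.
// The fields here are assumptions for illustration.
type Mutation struct {
	Entity    uint64 `json:"entity"`
	Attribute string `json:"attr"`
	ValueID   uint64 `json:"value"`
	Delete    bool   `json:"delete"`
}

// Log appends records to a file and fsyncs, so a mutation is durable once
// Append returns; in-memory posting-list layers can be rebuilt from it.
type Log struct {
	f *os.File
}

// Open opens (or creates) the append-only log file.
func Open(path string) (*Log, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		return nil, err
	}
	return &Log{f: f}, nil
}

// Append writes one mutation as a JSON line and syncs it to disk.
func (l *Log) Append(m Mutation) error {
	b, err := json.Marshal(m)
	if err != nil {
		return err
	}
	if _, err := l.f.Write(append(b, '\n')); err != nil {
		return err
	}
	return l.f.Sync()
}
```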
Dgraph's PostingList structure stores all DirectedEdges corresponding to an Attribute in the format Attribute: Entity -> sorted list of ValueId, which already contains all the data needed for a join. Therefore, each RPC call to the cluster results in only one join rather than multiple joins, and the join operation is reduced to a lookup in the storage layer rather than being performed at the application layer.
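A simplified sketch of that layout and of the "join as a lookup" idea follows; the types are illustrative, not Dgraph's real structs.

```go
package posting

import "sort"

// DirectedEdge is one (entity, attribute, value) triple.
type DirectedEdge struct {
	Entity  uint64
	Attr    string
	ValueID uint64
}

// PostingList holds, for a single attribute, each entity's sorted list of
// value ids: Attribute: Entity -> sorted []ValueID.
type PostingList struct {
	Attr     string
	Postings map[uint64][]uint64 // entity -> sorted value ids
}

// NewPostingList returns an empty posting list for one attribute.
func NewPostingList(attr string) *PostingList {
	return &PostingList{Attr: attr, Postings: make(map[uint64][]uint64)}
}

// Add inserts an edge, keeping the value list sorted.
func (pl *PostingList) Add(e DirectedEdge) {
	vs := append(pl.Postings[e.Entity], e.ValueID)
	sort.Slice(vs, func(i, j int) bool { return vs[i] < vs[j] })
	pl.Postings[e.Entity] = vs
}

// Join is just a lookup: "values of attribute for entity X" is the list
// stored under X, so one traversal step needs one RPC to the group holding
// this attribute rather than an application-level join.
func (pl *PostingList) Join(entity uint64) []uint64 {
	return pl.Postings[entity]
}
```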
Dgraph does not support transactions at this point. A mutation can be composed of multiple edges, where each edge might belong to a different PostingList. Dgraph acquires RWMutex locks at the posting-list level; it does not acquire locks across multiple posting lists. For writes, some edges get written before others, so any reads that happen while a mutation is in progress may see partially committed data. However, there is a guarantee of durability: once a mutation succeeds, any subsequent read will see the updated data in its entirety.
Functions can only be applied to indexed attributes. Some predefined functions, such as term matching, inequality comparison, and geolocation, are provided; users only need to fill in the parameters to customize their queries.
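For illustration, the sketch below lists one root function of each kind as Go string constants. The predicate names (name, age, loc) and their index choices are assumptions; the functions themselves (anyofterms for term matching, ge for inequality, near for geolocation) are examples of the kind of predefined functions described above.

```go
package main

import "fmt"

// Example GraphQL+- root functions over indexed attributes. Predicate names
// are hypothetical and each assumes the corresponding index exists.
const (
	termQuery = `{ q(func: anyofterms(name, "graph database")) { name } }`
	ineqQuery = `{ q(func: ge(age, 21)) { name age } }`
	geoQuery  = `{ q(func: near(loc, [-122.4, 37.7], 1000)) { name } }`
)

func main() {
	fmt.Println(termQuery)
	fmt.Println(ineqQuery)
	fmt.Println(geoQuery)
}
```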
The RocksDB library decides whether data is served out of memory, SSD, or disk. To allow processing to proceed, updates to a posting list can be stored in memory as an overlay over the immutable PostingList. Two separate update layers are provided, for replacement and for addition/deletion respectively, which allows iteration over Postings in memory without fetching anything from disk.
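A rough sketch of that overlay idea, under assumed types: an immutable base list (as read from RocksDB) plus an in-memory replacement layer and an in-memory add/delete layer, with iteration consulting the layers before falling back to the base list.

```go
package posting

// layeredList sketches a posting list whose immutable base is overlaid by
// two in-memory layers: one for replacement and one for additions/deletions.
type layeredList struct {
	base    []uint64        // immutable, sorted value ids from disk
	replace []uint64        // if non-nil, replaces base entirely
	added   map[uint64]bool // values added since the last merge
	deleted map[uint64]bool // values deleted since the last merge
}

// iterate yields the current view without touching disk again: start from
// the replace layer if present, otherwise from base, then apply the
// add/delete layer. Added values are emitted last, in no particular order.
func (l *layeredList) iterate(yield func(uint64)) {
	src := l.base
	if l.replace != nil {
		src = l.replace
	}
	seen := make(map[uint64]bool)
	for _, v := range src {
		if l.deleted[v] {
			continue
		}
		seen[v] = true
		yield(v)
	}
	for v := range l.added {
		if !seen[v] && !l.deleted[v] {
			yield(v)
		}
	}
}
```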
Dgraph uses the Raft consensus algorithm for communication between servers. During each term (election cycle), voting decides a single leader; RPC communication then flows unidirectionally from the leader to the followers, and servers do not share a disk. Each server exposes a gRPC interface, which the query processor can call to retrieve data. Clients must locate the cluster to interact with it. A client can pick any server in the cluster at random; if that server is not the leader, the request is rejected and the leader's information is passed along, so the client can re-route its query to the leader.
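The client-side behavior can be sketched as follows. The Server interface and redirect payload here are assumptions for illustration (the real servers expose gRPC): pick any server, and if it reports that it is not the leader, retry against the leader address it returns.

```go
package client

import (
	"errors"
	"math/rand"
)

// Server is a hypothetical handle to one cluster member's RPC interface.
type Server interface {
	// Query returns the result, or notLeader=true plus the leader's
	// address if this server is currently a follower.
	Query(q string) (result string, notLeader bool, leaderAddr string, err error)
}

// Cluster holds the known servers, keyed by address.
type Cluster struct {
	servers map[string]Server
}

// Query picks a random server; if it rejects the request because it is not
// the Raft leader, the query is re-routed to the reported leader.
func (c *Cluster) Query(q string) (string, error) {
	addrs := make([]string, 0, len(c.servers))
	for a := range c.servers {
		addrs = append(addrs, a)
	}
	if len(addrs) == 0 {
		return "", errors.New("no servers known")
	}
	srv := c.servers[addrs[rand.Intn(len(addrs))]]

	res, notLeader, leaderAddr, err := srv.Query(q)
	if err != nil {
		return "", err
	}
	if !notLeader {
		return res, nil
	}
	leader, ok := c.servers[leaderAddr]
	if !ok {
		return "", errors.New("leader " + leaderAddr + " not known")
	}
	res, _, _, err = leader.Query(q)
	return res, err
}
```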