Elliptics is a fault-tolerant distributed key/value (non-relational) database system. The core storage engine is a hash table: all keys are hashed into 512-bit IDs, by default using the SHA-512 algorithm.
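As a sketch of this default key mapping (illustrative only, not the actual Elliptics implementation), hashing a string key with SHA-512 yields a 512-bit integer ID:

```python
import hashlib

def key_to_id(key: str) -> int:
    """Hash a key into a 512-bit integer ID with SHA-512,
    mirroring Elliptics' default key mapping (sketch)."""
    digest = hashlib.sha512(key.encode("utf-8")).digest()
    return int.from_bytes(digest, "big")

object_id = key_to_id("user:42/avatar.png")
print(object_id.bit_length())  # at most 512 bits
```

Because the ID space is fixed at 512 bits regardless of key length, nodes can own contiguous ID ranges and keys distribute uniformly across them.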
Elliptics was initially created in 2007 as part of POHMELFS v1. POHMELFS (Parallel Optimized Host Message Exchange Layered File System) is a cache-coherent distributed file system developed by the Russian Linux developer Evgeny Polyakov; it can be viewed as a protocol for sharing files between computers over a LAN. In 2009 Elliptics was separated from POHMELFS and later grew into a standalone distributed storage system.
Deterministic Concurrency Control
Elliptics supports a basic lock operation on key-value pairs. Locks are granted per key ID, which makes operations on a single key atomic within a single group. However, there is no inter-group locking for synchronization.
There is no deadlock detection at the system level; it is the client program's responsibility to prevent deadlocks.
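A minimal sketch of per-key-ID locking within one group (the class name `KeyLockTable` is hypothetical, not part of the Elliptics API):

```python
import threading
from collections import defaultdict

class KeyLockTable:
    """Grant one lock per key ID, making single-key operations atomic
    within one group; nothing coordinates locks across groups, and
    there is no deadlock detection: clients must order acquisitions."""
    def __init__(self):
        self._guard = threading.Lock()
        self._locks = defaultdict(threading.Lock)

    def lock(self, key_id: int) -> threading.Lock:
        with self._guard:            # serialize access to the table itself
            return self._locks[key_id]

table = KeyLockTable()
store = {}

def atomic_write(key_id: int, value: bytes) -> None:
    with table.lock(key_id):         # atomic with respect to this key only
        store[key_id] = value

atomic_write(7, b"payload")
```

A client that takes two key locks in opposite orders on two threads can still deadlock; the system will not detect or break it.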
Elliptics uses an eventual consistency model to maintain data replicas: the replicas of a piece of data may not be identical at any given moment, but they will eventually be synchronized with each other.
Elliptics's own storage supports the key-value data model, but a custom backend can be written to plug in, for example, an SQL database, thereby supporting relational data storage.
The index structure exposed to the client is called secondary indexes; it is implemented with the STL std::map template in C++.
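While the core keeps secondary indexes in a C++ std::map, a Python analogue (purely illustrative; the class and method names are not from the Elliptics API) maps an index name to the set of object IDs tagged with it:

```python
from collections import defaultdict

class SecondaryIndexes:
    """Toy analogue of Elliptics secondary indexes: a map from
    index name to the set of object IDs filed under it."""
    def __init__(self):
        self._index = defaultdict(set)

    def set_indexes(self, object_id: int, names: list) -> None:
        """Tag an object with one or more index names."""
        for name in names:
            self._index[name].add(object_id)

    def find_all(self, names: list) -> set:
        """Return IDs present under every listed index (intersection)."""
        sets = [self._index[n] for n in names]
        return set.intersection(*sets) if sets else set()

idx = SecondaryIndexes()
idx.set_indexes(1, ["photos", "2013"])
idx.set_indexes(2, ["photos"])
print(idx.find_all(["photos", "2013"]))  # {1}
```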
Isolation is used in the Elliptics core to spawn slaves/workers in a controlled environment. Each worker is a separate process, which can be interpreted as a transaction. The resources accessible to a worker can be limited at the process level or the cgroup level, which yields two types of isolation in Elliptics: Process and CGroup. Process isolation cannot be configured, while CGroup isolation can take system configuration as an argument.
In terms of isolation, readers and writers behave differently. Because the locking mechanism is controlled manually by the clients, if no lock is taken during a read, writers can sneak in and readers may return either the old or the new content. And because Elliptics updates replicas in parallel without holding a single 'replica' log, when reading from multiple replicas it is possible that one reader receives the old content while another already sees the new one. Writers, in contrast, receive a completion status for every replica in all cases. If transactions must be atomic across the physically distributed replicas, clients can implement a central entry point that holds the lock and commits the update only after all completion statuses have been received; in this way, transaction rollback can also be implemented.
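The client-side pattern just described, a central entry point that holds a lock and commits only after every replica reports success, might look like this sketch (all class names are hypothetical, and the toy replicas stand in for real Elliptics groups):

```python
import threading

class DictReplica:
    """Toy in-memory replica standing in for one Elliptics group."""
    def __init__(self, fail=False):
        self.data, self.fail = {}, fail
    def read(self, key):
        return self.data.get(key)
    def write(self, key, value):
        if self.fail:
            return False             # simulate a failed completion status
        self.data[key] = value
        return True
    def delete(self, key):
        self.data.pop(key, None)

class CentralWriter:
    """Central entry point: hold the lock, write to every replica,
    and commit only if all completion statuses are successful."""
    def __init__(self, replicas):
        self._lock = threading.Lock()
        self._replicas = replicas

    def write(self, key, value) -> bool:
        with self._lock:
            old = [r.read(key) for r in self._replicas]
            statuses = [r.write(key, value) for r in self._replicas]
            if all(statuses):
                return True          # every replica acknowledged
            # roll back any replica that did apply the write
            for r, ok, prev in zip(self._replicas, statuses, old):
                if ok:
                    r.delete(key) if prev is None else r.write(key, prev)
            return False

good, bad = DictReplica(), DictReplica(fail=True)
writer = CentralWriter([good, bad])
print(writer.write(1, b"v"))  # False: second replica failed, first rolled back
```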
The API documentation shows that Elliptics does not provide join methods callable from the client, since it is not table-oriented and the only guaranteed atomicity is per object within a single replica. This decision was made for performance and scalability reasons.
However, inside the backends, merging and joins are available and vary from backend to backend, controlled by the query execution core. For example, when a relational database is used as the backend, joins are supported, whereas with LevelDB as the backend they are not.
Elliptics has used replication to ensure data availability from the beginning of its design. To use the replication features, a group of servers is logically bound together, and the servers within the same group all contain the same data.
For recovery logging, Elliptics uses a route table to record the set of nodes and the ID ranges of the key(ID)-value pairs held by each node. A route table contains at least the following fields: timestamp, group ID, hexadecimal ID strings, and address.
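Modeling a route table entry as (timestamp, group, range start, address), looking up the node responsible for a key becomes a search over the ring of ID ranges. This is a sketch of the idea, not the Elliptics wire format:

```python
import bisect
from collections import namedtuple

RouteEntry = namedtuple("RouteEntry", "timestamp group id_start address")

class RouteTable:
    """Map a key ID to the node whose ID range covers it, per group."""
    def __init__(self, entries):
        self._by_group = {}
        for e in sorted(entries, key=lambda e: (e.group, e.id_start)):
            self._by_group.setdefault(e.group, []).append(e)

    def lookup(self, group: int, key_id: int) -> str:
        entries = self._by_group[group]
        starts = [e.id_start for e in entries]
        i = bisect.bisect_right(starts, key_id) - 1
        return entries[i % len(entries)].address  # wrap around the ring

table = RouteTable([
    RouteEntry(0, 1, 0, "10.0.0.1:1025"),
    RouteEntry(0, 1, 2**511, "10.0.0.2:1025"),
])
print(table.lookup(1, 2**511 + 5))  # 10.0.0.2:1025
```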
Elliptics supports three types of recovery: hash ring recovery (merge), deep hash ring recovery (deep_merge), and replica recovery (dc). The merge approach mainly supports data recovery within the same group, moving data from one node to another according to the route tables. Replica recovery is designed to ensure durability between groups. Elliptics also supports manual recovery operations.
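Under the route table model, merge recovery within one group amounts to moving each key from the node that currently holds it to the node the route table says should hold it. A purely illustrative sketch (the function and structures are not from the Elliptics codebase):

```python
def merge_recover(nodes: dict, responsible) -> None:
    """Move keys between the nodes of one group so that every key
    ends up on the node the route table assigns it to (merge-style)."""
    for addr, store in nodes.items():
        for key_id in list(store):
            target = responsible(key_id)   # route-table lookup
            if target != addr:             # key sits on the wrong node
                nodes[target][key_id] = store.pop(key_id)

# two nodes of one group; keys below 100 belong to node "a"
nodes = {"a": {1: b"x", 200: b"y"}, "b": {}}
merge_recover(nodes, lambda k: "a" if k < 100 else "b")
print(nodes)  # {'a': {1: b'x'}, 'b': {200: b'y'}}
```

Replica (dc) recovery would instead compare the same key ranges across different groups and copy the newest version, but the movement primitive is the same.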
Custom API Command-line / Shell
The API is designed to support C, C++, and Python.
Python APIs are designed to configure the Elliptics client, including Logger, Config, Node, Session, etc. The C++ APIs also configure the client side of Elliptics, but with fewer APIs than the Python library. Node is the main controlling structure of Elliptics.
C APIs can be used to configure both client and server, covering creation, configuration, server-side processing, cache, and backends. On the client side, everything is built on an asynchronous API model, while on the server side both synchronous and asynchronous calls are possible.
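The asynchronous client model, in which a write returns immediately and the result later arrives as one completion status per replica, can be sketched with standard futures (the class and method names here are hypothetical, not the real C/C++ API):

```python
from concurrent.futures import ThreadPoolExecutor, Future

class MemReplica:
    """Toy in-memory replica returning a numeric completion status."""
    def __init__(self):
        self.data = {}
    def write(self, key, value):
        self.data[key] = value
        return 0                      # 0 = success status

class AsyncClient:
    """Sketch of the asynchronous client model: write() returns a
    future at once; its result is one status per replica."""
    def __init__(self, replicas):
        self._replicas = replicas
        self._pool = ThreadPoolExecutor(max_workers=4)

    def write(self, key: int, value: bytes) -> Future:
        def run():
            # collect one completion status per replica
            return [r.write(key, value) for r in self._replicas]
        return self._pool.submit(run)

client = AsyncClient([MemReplica(), MemReplica()])
future = client.write(7, b"blob")     # returns without blocking
print(future.result())  # [0, 0]
```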
The storage architecture is named Backends in Elliptics. Elliptics has three low-level backends: filesystem (written objects are stored as files), Eblob (a fast append-only storage), and Smack (small compressible objects stored in sorted tables).
Moreover, Elliptics implements both a generic storage protocol and its own specific protocol, so data stored in other services can be routed into Elliptics. For example, Elliptics can connect to MySQL servers and trigger special commands to read/write data into Elliptics.
Elliptics contains three layers: the frontend layer, the Elliptics core, and the backends. The role of a frontend is to connect multiple clients to the core to perform query execution. The backends are the data storage layer, providing different storage systems.