The Elliptics network is a fault-tolerant distributed key/value database system. With the default key generation policy, which maps each key to a 512-bit ID via the SHA-512 algorithm, it implements a distributed hash table for object storage.
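The default key-to-ID mapping can be sketched with Python's standard library (the key name below is made up for illustration; this is not Elliptics client code):

```python
import hashlib

def make_id(key: bytes) -> bytes:
    """Map an arbitrary key to a 512-bit (64-byte) ID, as the default policy does."""
    return hashlib.sha512(key).digest()

object_id = make_id(b"photos/2014/cat.jpg")  # hypothetical key
print(len(object_id) * 8)  # the ID is 512 bits wide
```

Because the ID space is fixed and uniform, any node can compute an object's location from the key alone, with no central directory.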
Elliptics was initially created in 2007 as part of POHMELFS v1. POHMELFS, an abbreviation of Parallel Optimized Host Message Exchange Layered File System, is a cache-coherent distributed file system developed by the Russian Linux hacker Evgeny Polyakov. It can be viewed as a protocol for sharing files between file systems on computers over a LAN. In 2009 Elliptics was separated from POHMELFS and later became a standalone distributed storage system. As of 2014, Elliptics was used in Yandex Maps, Disk, Music, Photos, and other parts of Yandex's infrastructure.
APIs are provided for C, C++, and Python.
In the documentation for the current version of Elliptics, only the link to the Python API works correctly; the links to the C API and C++ API documents are broken. Archived versions can be retrieved from archive.org, whose snapshot is dated May 11, 2016. Here are the links to the C API and C++ API.
The Python APIs are designed to configure the Elliptics client, covering Logger, Config, Node, Session, etc. The C++ APIs also configure the client side, but offer fewer calls than the Python library. Node is the main controlling structure of Elliptics.
The C APIs can be used to configure both the client and the server, covering creation, configuration, server-side processing, cache, and backends. On the client side, everything is built on an asynchronous API model, while the server side supports both synchronous and asynchronous calls.
Elliptics supports a basic lock operation on key-value pairs. A lock is granted per key ID, which makes operations on a single key atomic within a single group. However, there is no locking between groups for synchronization.
There is no deadlock detection at the system level. It is the user's responsibility to prevent deadlocks when developing client programs.
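One standard client-side way to meet that responsibility is to acquire per-key locks in a globally consistent order. A minimal sketch (the lock table and key names are hypothetical, not part of the Elliptics API):

```python
import threading

# Hypothetical per-key locks, keyed by ID, mimicking per-key locking on the client side.
locks = {key: threading.Lock() for key in ("id_a", "id_b")}

def update_pair(k1: str, k2: str) -> None:
    """Acquire both locks in sorted key order, so two clients updating the
    same pair of keys in opposite argument orders can never deadlock."""
    first, second = sorted((k1, k2))
    with locks[first]:
        with locks[second]:
            pass  # ... read-modify-write both objects here ...
```

Since every caller sorts the keys before locking, no circular wait can form regardless of the order in which callers name the keys.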
Elliptics uses an eventual consistency model to maintain data replicas: the replicas in a group may not be identical at any given moment, but they will eventually be synchronized with each other.
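The convergence guarantee can be illustrated with a toy last-write-wins merge (a generic illustration of eventual consistency, not Elliptics' actual reconciliation code): each value carries a timestamp, and merging two divergent replicas keeps the newer value, so the merge result is the same regardless of direction.

```python
# Toy replicas: key -> (timestamp, value).
def merge(replica_a: dict, replica_b: dict) -> dict:
    """Last-write-wins merge: for each key, keep the value with the newest timestamp."""
    merged = {}
    for key in replica_a.keys() | replica_b.keys():
        candidates = [r[key] for r in (replica_a, replica_b) if key in r]
        merged[key] = max(candidates, key=lambda v: v[0])
    return merged

a = {"k": (1, "old")}
b = {"k": (2, "new")}
# Merging in either order yields the same state, so replicas converge.
assert merge(a, b) == merge(b, a) == {"k": (2, "new")}
```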
Elliptics has used replication to ensure data availability from the beginning of its design. To use the replication features, an administrator logically binds a group of servers together; these servers contain the same data, i.e., each server keeps a replica of the other nodes' data.
For recovery, Elliptics uses a route table to record the set of nodes and the ranges of key IDs each node is responsible for. A route table contains at least the following fields: timestamp, group ID, the hex ID string marking a range boundary, and the node address.
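The route-table lookup can be sketched as follows (the table entries and addresses are invented for illustration, and the ID is truncated to one byte to keep the example small; real entries use the full 512-bit ID):

```python
import bisect
import hashlib

# Hypothetical simplified route table: sorted (range_start, address) entries.
# A key belongs to the node whose range start is the greatest one <= the key's ID.
route_table = sorted([
    (0x00, "node-1:1025"),
    (0x55, "node-2:1025"),
    (0xAA, "node-3:1025"),
])

def locate_id(kid: int) -> str:
    """Find the node responsible for a given ID via binary search over range starts."""
    starts = [start for start, _ in route_table]
    return route_table[bisect.bisect_right(starts, kid) - 1][1]

def locate(key: bytes) -> str:
    """Hash the key and look up its node (using only the ID's first byte here)."""
    return locate_id(hashlib.sha512(key).digest()[0])
```

This is the same lookup a recovery pass needs: by comparing where a key's ID falls in the table against where the data actually sits, misplaced objects can be moved to the correct node.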
Elliptics supports three types of recovery: hash ring recovery (a.k.a. merge), deep hash ring recovery (a.k.a. deep_merge), and replica recovery (a.k.a. dc). Merge mainly handles data recovery within a single group by moving data from one node to another according to the route tables. Replica recovery ensures durability across groups. Elliptics can also perform manual recovery.
As a NoSQL key-value database, it does not implement a join operation at the system level.
There are two types of isolation used in Elliptics: Process and CGroup.
The index structure exposed to the client is called secondary indexes. Its implementation uses the STL std::map template in C++.
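Conceptually, a secondary index maps an index name (a tag) to the set of object IDs carrying that tag. A Python stand-in for that idea (a sketch of the concept, not the C++ std::map-based implementation):

```python
from collections import defaultdict

# tag -> set of object IDs carrying that tag.
index: dict = defaultdict(set)

def tag(object_id: str, *tags: str) -> None:
    """Register an object under one or more secondary-index tags."""
    for t in tags:
        index[t].add(object_id)

def find(t: str) -> set:
    """Return the IDs of all objects carrying the given tag."""
    return index.get(t, set())

tag("id1", "photos", "2014")
tag("id2", "photos")
```

Lookups then go through the index rather than the primary key space: `find("photos")` returns both IDs, while `find("2014")` returns only the first.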
The storage architecture is named Backends in Elliptics. Elliptics has three low-level backends: filesystem (written objects are stored as files), Eblob (a fast append-only storage), and Smack (small compressible objects stored in sorted tables).
Moreover, Elliptics implements both a generic storage protocol and its own specific protocol, so data stored in other services can be routed to Elliptics. For example, Elliptics can connect to MySQL servers and trigger special commands to read/write data into Elliptics.
Elliptics consists of three layers: the frontend layer, the Elliptics core, and the backends. The role of a frontend is to connect multiple clients to the core for query execution. The backends form the data storage layer, providing different storage systems.