Apache Accumulo is a sorted, distributed key-value store based on the design of Google's Bigtable and built on top of Apache Hadoop's HDFS and Apache ZooKeeper. First designed and developed by a team at the NSA, Accumulo aims to support big data storage and processing while enforcing fine-grained data access control. In particular, the NSA team extended the Bigtable design so that Accumulo can control access to individual data elements. Accumulo is currently an open source project under the Apache License, Version 2.0.


*2006* Google publishes "Bigtable: A Distributed Storage System for Structured Data." *January 2008* To solve the problem of storing and processing large amounts of data with different sensitivity levels, a team of computer scientists and mathematicians at the NSA evaluates various big data technologies. *July 2008* The NSA team decides to begin a new Bigtable implementation. *September 2011* Accumulo becomes a public open source incubator project at the Apache Software Foundation. *March 2012* Version 1.3.5 is released; this is the first publicly available version. *April 2012* Version 1.4 is released. *May 2013* Version 1.5 is released; this version incorporates the Thrift proxy and table import/export into Accumulo. *May 2014* Version 1.6 is released.


Prefix Compression

Accumulo employs two compression techniques. The first is running GZip or LZO on blocks of data stored on disk. The second is relative-key encoding: a common prefix shared by consecutive keys is stored only once, and each following key stores only the part that differs from its predecessor.

Data Model

Column Family

Based on Google's Bigtable, Accumulo is a column-oriented DBMS. It stores key-value pairs on disk and always keeps the keys sorted. Values are stored as byte arrays, and neither their size nor their type is restricted. Keys consist of three components: a row ID, a column, and a timestamp. Keys are sorted first by row ID, then by column, and finally by timestamp (newest first). This implies that values in the same row are stored together, and that different rows need not contain the same number of columns. Timestamps are used to support multiple versions of the same key. The column component of a key can be further divided into three fields: column family, column qualifier, and column visibility. Column families are defined by the application designer to group columns with similar functions, so that Accumulo stores them close together on disk for faster access. Note that, unlike in Bigtable and HBase, Accumulo column families need not be declared before use. Column visibility is unique to Accumulo; it allows data with different sensitivity levels to be stored in the same physical table.
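The sort order described above can be sketched in Python. This is a simplified model (Accumulo's actual Key class is Java and compares raw bytes; the field values here are hypothetical), but it shows how rows cluster together and how newer versions of a cell sort first:

```python
# Sketch of Accumulo key ordering: row ID, column family, column
# qualifier, visibility, then timestamp in *descending* order.

from typing import NamedTuple

class Key(NamedTuple):
    row: str
    family: str
    qualifier: str
    visibility: str
    timestamp: int

def sort_key(k: Key):
    # Negate the timestamp so larger (newer) timestamps sort earlier.
    return (k.row, k.family, k.qualifier, k.visibility, -k.timestamp)

keys = [
    Key("user002", "info", "name", "public", 10),
    Key("user001", "info", "name", "public", 10),
    Key("user001", "info", "name", "public", 20),  # newer version of the same cell
]
for k in sorted(keys, key=sort_key):
    print(k.row, k.qualifier, k.timestamp)
# user001 name 20
# user001 name 10
# user002 name 10
```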

Query Interface

Custom API Command-line / Shell

Accumulo provides the user with two ways to interact with the system. The first is the client API: a native Java client library, with Thrift proxy bindings for other languages such as C++, Python, and Ruby. The second is a simple shell that allows the user to examine table contents, update configuration settings, insert/update/delete values, etc.
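An illustrative shell session might look like the following (the instance name, table name, values, and visibility label are hypothetical; the `scan` will only show the entry if the user's authorizations include `public`):

```
root@instance> createtable mytable
root@instance mytable> insert row1 family qualifier value -l public
root@instance mytable> scan
row1 family:qualifier [public]    value
```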

Storage Architecture


Accumulo is a disk-oriented database that relies on HDFS to store data.

Storage Model

Decomposition Storage Model (Columnar)

Accumulo is a schema-less, column-oriented key-value datastore. As described in the Data Model section, key-value pairs are stored together, sorted by row ID, column, and timestamp.

Stored Procedures

Not Supported

System Architecture


Relying on HDFS to manage files, Accumulo follows a shared-nothing architecture. Each Accumulo node has its own CPU, memory, and disk, and owns a shard of the data. Each table is partitioned into tablets that are scattered across different nodes, so Accumulo nodes are also called tablet servers. Accumulo can also split a large tablet into two and redistribute the halves as new data arrives. Unlike some other DBMSs, Accumulo maintains sorted key-value pairs, so data is partitioned by key range rather than by hashing. Because disks are faster at sequential access than at random access, range partitioning allows Accumulo to scan consecutive keys faster than systems that use hashing. However, it incurs the overhead of storing a mapping between each contiguous portion of the sorted key space and its tablet server; this mapping is stored in the metadata table.
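Range partitioning can be sketched as a lookup against a sorted list of split points. This is a rough illustration only (the split points, server names, and function are hypothetical, not Accumulo's metadata schema):

```python
# Sketch of range partitioning: split points divide the sorted key space
# into tablets, and a lookup finds which tablet server owns a given key.

import bisect

# End rows of each tablet except the last, which holds everything greater.
split_points = ["g", "p"]
tablet_servers = ["tserver1", "tserver2", "tserver3"]

def locate(row_id: str) -> str:
    """Map a row ID to the tablet server whose range contains it."""
    # bisect_left finds the first split point >= row_id, i.e. the tablet
    # whose (inclusive) end row covers this key.
    idx = bisect.bisect_left(split_points, row_id)
    return tablet_servers[idx]

print(locate("apple"))   # tserver1
print(locate("grape"))   # tserver2
print(locate("zebra"))   # tserver3
```

A scan over a contiguous key range touches only the few tablets whose ranges overlap it, which is what makes sequential scans cheap compared to hash-partitioned systems.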