SciDB emerged from the [Extremely Large Data Base (XLDB)](https://www.xldb.org/about/) Conference, first hosted in 2007. The conference was organized by the [SLAC National Accelerator Laboratory’s](https://www6.slac.stanford.edu/) Scalable Data Systems team to address the gap between existing database systems and the needs of data-intensive scientific projects such as the [Large Synoptic Survey Telescope (LSST)](https://lsst.slac.stanford.edu/) astronomical survey. Mike Stonebraker and David DeWitt agreed to lead the development of a database that would fulfill the needs of these projects. A SciDB workshop was hosted at the second XLDB conference in 2008, and code development began the same year. In 2009, Mike Stonebraker and Marilyn Matz co-founded [Paradigm4](https://www.paradigm4.com/). Paradigm4’s team developed SciDB into a robust commercial software product and continues to develop and improve the two offered versions of SciDB: an open-source Community Edition and a proprietary Enterprise Edition that offers additional functionality and customer-specific solutions.
Run-Length Encoding, Null Suppression
SciDB allows users to define how each attribute of an array will be compressed when the array is created. The default is no compression. The additional options are zlib, bzlib, or null filter (null suppression) compression. Since SciDB stores data by attribute, vertically partitioning logical chunks of an array into single-attribute physical chunks, the specified compression is used on a chunk-by-chunk basis. If certain parts of a chunk are accessed more often than others, causing overhead due to decompression and recompression, SciDB can partition a chunk into tiles and compress on a tile-by-tile basis. Run-length encoding is used to compress recurring sequences of data. In addition, SciDB’s storage manager compression engine can split or group logical chunks in order to optimize memory usage while remaining within the limit of the buffer pool’s fixed-size slots.
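Run-length encoding compresses a sequence by storing each distinct value together with the number of times it repeats consecutively, which pays off when a chunk contains long runs of identical (or null) values. A minimal illustrative sketch of the technique follows; it is not SciDB's internal implementation.

```python
def rle_encode(values):
    """Compress a sequence into (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            runs.append((v, 1))
    return runs

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original sequence."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

# A chunk of one attribute's values; None stands in for a null cell.
chunk = [0, 0, 0, 7, 7, None, None, None, None, 3]
encoded = rle_encode(chunk)
print(encoded)  # [(0, 3), (7, 2), (None, 4), (3, 1)]
assert rle_decode(encoded) == chunk
```

Because SciDB compresses chunk by chunk (or tile by tile), only the tiles that a query touches need to be decoded, limiting decompression overhead.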
SciDB supports multi-dimensional arrays. Upon creating an array, the user specifies its dimensions and attributes. Each unique set of dimension values maps to a single cell in the array. Each cell is defined by a collection of attributes, where an attribute represents a single data value. Both dimension and attribute data types can be user-defined, giving users the flexibility to specify coordinates and/or classifications that fit their applications. If dimensions are not specified, SciDB creates a data frame: an unordered group of cells. Users can also create temporary arrays, which are stored in memory and do not keep deltas of changes as non-temporary arrays do.
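The mapping from dimension coordinates to attribute-valued cells can be pictured as a dictionary keyed by coordinate tuples. The sketch below is a conceptual illustration (the names and the two-dimensional weather example are hypothetical, not SciDB's API):

```python
# A 2-D array with dimensions (lat, lon) and attributes (temp, humidity).
# Each unique coordinate tuple identifies exactly one cell, and each cell
# holds one value per attribute.
array = {
    (0, 0): {"temp": 21.5, "humidity": 0.40},
    (0, 1): {"temp": 22.1, "humidity": 0.38},
    (1, 0): {"temp": 20.9, "humidity": 0.45},
}

cell = array[(0, 1)]
print(cell["temp"])  # 22.1

# A data frame, by contrast, is an unordered group of cells with no
# coordinate keys -- conceptually just a collection of attribute records.
frame = list(array.values())
```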
Foreign keys are not part of the array data model used by SciDB.
SciDB is intended to be used with inexpensive and widely available hardware. This design decision gives users flexibility in maintaining their system: they can add nodes to increase capacity and/or performance, and they are free to choose the hardware that best fits their requirements.
SciDB does not use indexes. Instead, it maps chunks of an array to specific nodes by hashing each chunk’s coordinates. SciDB also maintains a map that lets dimensions declared with user-defined data types be represented internally as integers; the SciDB documentation refers to this map as an index.
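Assigning chunks to nodes by hashing their coordinates can be sketched as follows. This is a hypothetical illustration of the technique; Python's built-in `hash()` stands in for whatever hash function SciDB actually uses.

```python
def node_for_chunk(chunk_coords, num_nodes):
    """Map a chunk's coordinate tuple to a node id by hashing.

    Placement is computed from the coordinates alone, so locating a
    chunk requires no separate index structure: any node can recompute
    the same answer from the same coordinates.
    """
    return hash(chunk_coords) % num_nodes

# The same chunk always lands on the same node, deterministically.
assert node_for_chunk((0, 0), 4) == node_for_chunk((0, 0), 4)
assert 0 <= node_for_chunk((3, 5), 4) < 4
```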
SciDB logs all queries using [Apache log4cxx](https://logging.apache.org/log4cxx/latest_stable/).
SciDB is disk-oriented, which allows it to support the large scale of data that may be stored for a single application.
Decomposition Storage Model (Columnar)
SciDB stores data by attribute, vertically partitioning logical chunks of an array into single-attribute physical chunks.
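Vertical partitioning of a logical chunk into single-attribute physical chunks can be sketched as below. The dictionary-based representation is illustrative only, not SciDB's storage format.

```python
def vertically_partition(logical_chunk):
    """Split a logical chunk of {coords: {attr: value}} cells into one
    {coords: value} physical chunk per attribute."""
    physical = {}
    for coords, attrs in logical_chunk.items():
        for name, value in attrs.items():
            physical.setdefault(name, {})[coords] = value
    return physical

chunk = {(0, 0): {"temp": 21.5, "humidity": 0.40},
         (0, 1): {"temp": 22.1, "humidity": 0.38}}
parts = vertically_partition(chunk)
print(sorted(parts))          # ['humidity', 'temp']
print(parts["temp"][(0, 1)])  # 22.1
```

A query that reads only one attribute then touches only that attribute's physical chunks, which is the usual motivation for a columnar (decomposition) storage model.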
SciDB has a shared-nothing system architecture, which is intended to support the scalability of the system. Query processing occurs at each node on the data at that node. When creating an array, a user may specify the distribution of the array data: whether chunks will be stored primarily on one node or replicated on all nodes.
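The difference between the two placement policies can be sketched as follows (hypothetical names and a stand-in hash; not SciDB's API):

```python
def placement(chunk_coords, num_nodes, distribution="hashed"):
    """Return the list of node ids that should store a chunk.

    'hashed'     -> one node, chosen from the chunk's coordinates
    'replicated' -> every node keeps a copy
    (Illustrative only; SciDB's real policies and names may differ.)
    """
    if distribution == "replicated":
        return list(range(num_nodes))
    return [hash(chunk_coords) % num_nodes]

assert placement((2, 7), 4, "replicated") == [0, 1, 2, 3]
assert len(placement((2, 7), 4, "hashed")) == 1
```

Hashed placement spreads an array across the cluster so each node processes its own share, while replication trades storage for the ability to read the array locally on any node.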
Academic, Commercial, Open Source