Prometheus is an open-source time series database developed at SoundCloud, and serves as the storage layer for the Prometheus monitoring system. Inspired by Facebook's Gorilla system, Prometheus is purpose-built for monitoring and metric collection. Prometheus provides a multi-dimensional data model based on user-defined labels, together with a query language over that data called PromQL. Apart from local disk storage, Prometheus also offers remote storage integrations via Protocol Buffers.
Prometheus is written in Go and has official client libraries for Go, Java, Ruby, and Python, as well as unofficial client bindings for many other languages.
Prometheus was started at SoundCloud in 2012 as an open-source project for system monitoring. Such a system requires an efficient and fault-tolerant storage layer for incoming metrics as well as for the metadata describing those metrics, so the team built the Prometheus time series database as the backend for the whole monitoring platform.
The Prometheus time series database has gone through three major versions. Prometheus v1 was a basic implementation in which all time series data and label metadata were stored in LevelDB. V2 addressed several shortcomings of v1 by storing data on a per-time-series basis and adopting delta-of-delta compression. V3 made further improvements, adding a write-ahead log to handle crashes and better data block compaction.
Since the underlying representation of each series in Prometheus is a list of key-value pairs, its storage model is quite similar to that of ordinary key-value databases such as LevelDB and RocksDB. In fact, prior to Prometheus 2.0, its storage engine was LevelDB.
Prometheus stores data as time series. A time series is identified by a metric name and a set of key-value labels. A sample is a data point at a given timestamp, consisting of a float64 value and a unix timestamp. A time series can therefore be formally defined as a sequence of samples (t1, v1), (t2, v2), ... associated with one metric-and-labels identity.
Prometheus supports four metric types: counters, gauges, histograms, and summaries.
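The data model above can be sketched with a couple of Go types. This is a hypothetical illustration of the concepts, not Prometheus's actual internal structures:

```go
package main

import "fmt"

// sample is one data point: a unix timestamp paired with a float64 value.
type sample struct {
	Timestamp int64
	Value     float64
}

// series is identified by a metric name plus a set of key-value labels,
// and holds an append-only list of samples.
type series struct {
	Metric  string
	Labels  map[string]string
	Samples []sample
}

func main() {
	s := series{
		Metric: "http_requests_total",
		Labels: map[string]string{"method": "GET", "status": "200"},
	}
	s.Samples = append(s.Samples, sample{Timestamp: 1496163646, Value: 1027})
	fmt.Println(len(s.Samples))
}
```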
Prometheus supports periodic checkpoints, taken every two hours by default. A checkpoint in Prometheus is created by compacting the write-ahead logs in a given time range. All checkpoints are stored in the same directory under the name checkpoint.xxx, where the xxx suffix is a number that monotonically increases with time. When Prometheus recovers from a crash, it can therefore replay all the checkpoints in the checkpoint directory in the order of their suffixes.
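The replay ordering can be sketched as follows. The directory names here are illustrative (real suffixes are zero-padded sequence numbers), and this is not Prometheus's actual recovery code:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// replayOrder sorts checkpoint directory names by their numeric suffix,
// which is the order a recovering process would replay them in.
func replayOrder(names []string) []string {
	sort.Slice(names, func(i, j int) bool {
		a, _ := strconv.Atoi(strings.TrimPrefix(names[i], "checkpoint."))
		b, _ := strconv.Atoi(strings.TrimPrefix(names[j], "checkpoint."))
		return a < b
	})
	return names
}

func main() {
	dirs := []string{"checkpoint.00000012", "checkpoint.00000003", "checkpoint.00000007"}
	fmt.Println(replayOrder(dirs))
}
```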
Prometheus has two storage architectures: a local on-disk time series database and integrations with remote storage systems.
Prometheus is append-only and does not support transactions.
Prometheus has a custom query language called PromQL, designed specifically for querying time-series data. The query interface also implements math and datetime functions as well as aggregation operators, and Prometheus exposes a RESTful interface over HTTP.
Since each sample in Prometheus can be viewed as a tuple of a timestamp and a numerical value, Prometheus applies different compression techniques to timestamps and values respectively.
For timestamps, Prometheus uses a delta-of-delta compression algorithm similar to that of Facebook's Gorilla time-series database. For example, given the series of timestamps 1496163646, 1496163676, 1496163706, 1496163735, 1496163765, storing these timestamps as raw bytes is not efficient, since the values change very little over time. A better approach is to encode the timestamps as deltas, i.e. 1496163646, +30, +30, +29, +30. Because metrics usually arrive at a constant rate, Prometheus goes one step further and encodes the deltas of the deltas, i.e. 1496163646, +30, +0, -1, +1. If metrics arrive at a constant rate, most of these delta-of-deltas are 0 and can be stored very compactly.
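The encoding step can be sketched in a few lines of Go. This is a minimal illustration of the delta-of-delta idea, not Prometheus's actual bit-level encoder:

```go
package main

import "fmt"

// deltaOfDelta encodes a sorted slice of timestamps as a first value,
// a first delta, and the subsequent delta-of-deltas. For samples arriving
// at a constant rate, most delta-of-deltas are 0 and fit in very few bits.
func deltaOfDelta(ts []int64) (first, firstDelta int64, dods []int64) {
	first = ts[0]
	firstDelta = ts[1] - ts[0]
	prevDelta := firstDelta
	for i := 2; i < len(ts); i++ {
		delta := ts[i] - ts[i-1]
		dods = append(dods, delta-prevDelta)
		prevDelta = delta
	}
	return first, firstDelta, dods
}

func main() {
	ts := []int64{1496163646, 1496163676, 1496163706, 1496163735, 1496163765}
	first, firstDelta, dods := deltaOfDelta(ts)
	fmt.Println(first, firstDelta, dods) // 1496163646 30 [0 -1 1]
}
```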
In addition to timestamps, Prometheus also compresses numerical values, using an approach similar to existing floating-point compression algorithms. The idea is that the XOR of the bit patterns of neighboring floating-point samples in a time series often contains long runs of 0s; the compression algorithm leverages this to store only the meaningful bits.
For integration with remote storage engines, Prometheus uses a snappy-compressed Protocol Buffer encoding over HTTP for both its read and write protocols.
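A remote backend is wired in through the remote_write and remote_read sections of the Prometheus configuration file; the endpoint URLs below are placeholders for whatever adapter the backend provides:

```yaml
# prometheus.yml (fragment) -- hypothetical remote storage adapter endpoints
remote_write:
  - url: "http://remote-storage-adapter:9201/write"
remote_read:
  - url: "http://remote-storage-adapter:9201/read"
```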
Since queries in Prometheus are quite similar to those in key-value databases, the query execution model is a tuple-at-a-time model.
Prometheus supports flexible configuration of its backend storage. It maintains an on-disk checkpoint of series data itself and also supports remote reads and writes to other storage systems, which makes integrating Prometheus with other systems much easier. Prometheus also supports custom webhook receivers for sending alert notifications, e.g. to AWS SNS or an IRC bot.
Prometheus ensures data durability through write-ahead logging (WAL). The on-disk log format is largely borrowed from LevelDB/RocksDB.
A typical data point record in Prometheus's WAL is a triple (series_id, timestamp, value).
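The triple can be sketched as a Go struct with a simple serialization. This is only an illustration of the record's shape; the real Prometheus WAL uses a more compact, varint-based layout:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"math"
)

// walSample is a hypothetical in-memory form of a WAL sample record:
// the series it belongs to plus one (timestamp, value) pair.
type walSample struct {
	SeriesID  uint64
	Timestamp int64
	Value     float64
}

// encode serializes the record as three fixed-width little-endian fields.
func (s walSample) encode() []byte {
	var buf bytes.Buffer
	binary.Write(&buf, binary.LittleEndian, s.SeriesID)
	binary.Write(&buf, binary.LittleEndian, s.Timestamp)
	binary.Write(&buf, binary.LittleEndian, math.Float64bits(s.Value))
	return buf.Bytes()
}

func main() {
	rec := walSample{SeriesID: 42, Timestamp: 1496163646, Value: 0.75}
	fmt.Println(len(rec.encode())) // 24 bytes: three 8-byte fields
}
```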
Prometheus does not maintain complex index data structures. Its indexes are simply symbol tables that map metrics and labels to offsets in Prometheus's chunk files.
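A symbol-table style index amounts to a flat lookup structure. The sketch below is a hypothetical simplification (label pairs rendered as strings, offsets as plain integers), not Prometheus's actual index format:

```go
package main

import "fmt"

// seriesIndex maps a label pair to the byte offsets of matching series
// in the on-disk chunk files. There is no tree structure involved:
// a lookup is a single map access.
type seriesIndex map[string][]int64

// lookup returns the chunk-file offsets of every series carrying the pair.
func (idx seriesIndex) lookup(pair string) []int64 {
	return idx[pair]
}

func main() {
	idx := seriesIndex{
		`__name__="http_requests_total"`: {0, 4096},
		`method="GET"`:                   {4096},
	}
	fmt.Println(idx.lookup(`method="GET"`))
}
```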