BlazingSQL is a distributed, GPU-accelerated SQL engine with data lake integration, where a data lake is a large quantity of raw data stored in a flat architecture. It is ACID-compliant. BlazingSQL targets ETL workloads and aims to perform efficient read I/O and OLAP querying. BlazingDB refers to the company and BlazingSQL to the product. It is currently under active development with 15 employees, and BlazingDB has offices in San Francisco and Peru.
BlazingSQL started as a GPU table joiner for multi-terabyte databases. The Aramburu brothers, Rodrigo and Felipe, founded a company in 2013 that provided analytical solutions and needed to speed up joins for pension fraud detection. The system is closed-source with a free community binary. It integrates with RAPIDS, the open-source GPU data science initiative, which relies on NVIDIA GPUs.
BlazingSQL can utilize multiple GPUs distributed across different servers. BlazingSQL also has a distributed cache. Upon reading from the data lake, data is cached on the worker nodes. If a worker node A requests data that was recently read from the data lake by another worker node B, worker node B is able to push the desired data to worker node A.
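The peer-to-peer cache behavior described above can be sketched as a toy model. This is purely illustrative: the class and method names below are invented for the example and are not BlazingSQL's actual API; the real engine caches GPU DataFrames and moves them over interprocess communication.

```python
# Toy model of a distributed read cache: on a local miss, a worker asks its
# peers for the partition before falling back to the (slow) data lake.
# All names here are illustrative, not BlazingSQL's real implementation.

class Worker:
    def __init__(self, name, peers=None):
        self.name = name
        self.cache = {}            # partition id -> data cached on this node
        self.peers = peers or []   # other worker nodes in the cluster

    def read_from_lake(self, partition):
        # Stand-in for an expensive read from the data lake.
        data = f"data for {partition}"
        self.cache[partition] = data
        return data

    def get(self, partition):
        if partition in self.cache:          # local cache hit
            return self.cache[partition]
        for peer in self.peers:              # peer hit: cheaper than the lake
            if partition in peer.cache:
                # The peer "pushes" its cached copy to the requesting node.
                self.cache[partition] = peer.cache[partition]
                return self.cache[partition]
        return self.read_from_lake(partition)  # cold miss: go to the data lake

a = Worker("A")
b = Worker("B", peers=[a])
a.read_from_lake("part-0")      # node A reads a partition from the lake
print(b.get("part-0"))          # node B receives it from A's cache instead
```

The design point is that an intra-cluster transfer between workers is much cheaper than a second trip to the data lake, so recently read partitions are served peer-to-peer.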
Virtual Views, Materialized Views
BlazingSQL supports both virtual and materialized views. Materialized views are currently not persistent.
Code Generation, JIT Compilation
BlazingSQL uses RAPIDS libraries, which themselves use NVIDIA's CUDA. CUDA has support for JIT and code generation.
BlazingSQL is hardware-accelerated with NVIDIA GPUs. Relevant columnar data is compressed, cached, and sent to the GPU. The GPUs speed up transforms and predicate evaluation (using metadata to skip data that cannot satisfy a predicate) and perform accelerated joins. This is accomplished by hooking into the cu* libraries that are part of the RAPIDS initiative, which are themselves bindings around NVIDIA's CUDA libraries.
Decomposition Storage Model (Columnar)
BlazingSQL does not write data. It reads compressed data directly from the data lake and transmits relevant columns to the GPU. On the GPU, data is represented as a GPU DataFrame (GDF). GDFs are built on top of Apache Arrow, which is a columnar in-memory format.
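The benefit of the decomposition storage model can be shown with a minimal sketch: when a table is stored column by column, a query touching only some columns never has to load the rest. This is an illustration of the general technique, not of BlazingSQL's or Arrow's internals.

```python
# Minimal sketch of the decomposition storage model (DSM): the table is a
# collection of independent column arrays, so only the columns a query
# actually references are scanned. Illustrative only.

table = {
    "id":    [1, 2, 3, 4],
    "price": [9.5, 3.0, 7.25, 1.0],
    "name":  ["a", "b", "c", "d"],   # never touched by the query below
}

def project_filter(table, cols, pred_col, pred):
    """SELECT cols FROM table WHERE pred(pred_col), scanning only needed columns."""
    # Evaluate the predicate against exactly one column array...
    keep = [i for i, v in enumerate(table[pred_col]) if pred(v)]
    # ...then materialize only the projected columns at the surviving rows.
    return {c: [table[c][i] for i in keep] for c in cols}

# SELECT id, price FROM table WHERE price > 5.0
result = project_filter(table, ["id", "price"], "price", lambda v: v > 5.0)
print(result)   # {'id': [1, 3], 'price': [9.5, 7.25]}
```

In a row store the same query would have to read every field of every row; in the columnar layout, the `name` column is never loaded, which is what makes shipping only the relevant columns to the GPU cheap.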
BlazingSQL does not write data. It reads directly from the data lake, loading it into GPU DataFrames that can be shared with other BlazingSQL worker nodes through interprocess communication. Worker nodes do not have to be on the same machine; they can utilize different machines and different GPUs. BlazingSQL handles concurrency for the generation of result sets. However, the user is responsible for ensuring that the data in the data lake is internally consistent and free of corruption when it is queried.
Dictionary Encoding, Delta Encoding, Run-Length Encoding, Bit Packing / Mostly Encoding
Historically, BlazingSQL supported compression and decompression on the GPU with bit packing, delta encoding, dictionary encoding, and run-length encoding. This is currently disabled, alongside its custom Simpatico file format. As of November 2018, it operates directly on Apache Parquet, CSV, and ORC. BlazingSQL does not currently write data; it reads from the data lake and is able to operate directly on compressed data.
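Two of the lightweight encodings listed above, run-length and delta encoding, can be sketched in a few lines. These are generic textbook versions for illustration; BlazingSQL's historical GPU codecs were more elaborate and ran on the device.

```python
# Sketches of run-length encoding and delta encoding as applied to columnar
# data. Illustrative only, not BlazingSQL's implementation.

def rle_encode(col):
    """[7, 7, 7, 9] -> [(7, 3), (9, 1)]: compresses long runs of repeats."""
    out = []
    for v in col:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)   # extend the current run
        else:
            out.append((v, 1))              # start a new run
    return out

def delta_encode(col):
    """[100, 101, 103] -> [100, 1, 2]: small deltas pack into fewer bits."""
    return [col[0]] + [b - a for a, b in zip(col, col[1:])] if col else []

def delta_decode(enc):
    """Inverse of delta_encode: running sum over the deltas."""
    out, acc = [], 0
    for d in enc:
        acc += d
        out.append(acc)
    return out

col = [100, 100, 100, 101, 103]
print(rle_encode(col))           # [(100, 3), (101, 1), (103, 1)]
print(delta_encode(col))         # [100, 0, 0, 1, 2]
assert delta_decode(delta_encode(col)) == col   # round-trips losslessly
```

Both schemes exploit patterns common in sorted or low-cardinality columns, which is why they pair naturally with the columnar storage model: each column is encoded independently with whichever scheme fits its data.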