tectonicdb

Crates (all published on crates.io, documented on docs.rs):

  • tectonicdb
  • tdb-core
  • tdb-server-core
  • tdb-cli

tectonicdb is a fast, highly compressed standalone database and streaming protocol for order book ticks.

Why

  • Uses a simple and efficient binary file format: Dense Tick Format (DTF)

  • Stores order book tick data as tuples of shape: (timestamp, seq, is_trade, is_bid, price, size)

  • Sorted by timestamp + seq

  • 12 bytes per orderbook event

  • 600,000 inserts per second per thread
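The tuple shape above can be sketched as a plain Rust struct. The names and types here are illustrative, not tectonicdb's actual definitions, and the ~12-byte figure refers to the compressed on-disk DTF encoding, not this in-memory layout:

```rust
// Hypothetical in-memory representation of one orderbook event;
// fields mirror the documented tuple shape (timestamp, seq,
// is_trade, is_bid, price, size).
#[derive(Debug, Clone, Copy, PartialEq)]
struct Tick {
    timestamp: f64, // unix epoch seconds
    seq: u32,       // exchange sequence number
    is_trade: bool, // trade event vs. book level update
    is_bid: bool,   // bid side vs. ask side
    price: f32,
    size: f32,
}

fn main() {
    let t = Tick {
        timestamp: 1505177459.685,
        seq: 139010,
        is_trade: true,
        is_bid: false,
        price: 0.070362,
        size: 7.6506424,
    };
    // Events are ordered by (timestamp, seq), as stated above:
    let later = Tick { seq: t.seq + 1, ..t };
    assert!((later.timestamp, later.seq) > (t.timestamp, t.seq));
    println!("{:?}", t);
}
```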

Installation

There are several ways to install tectonicdb.

  1. Binaries

Binaries are available for download. Make sure the binary is on your PATH. Currently the only prebuilt binary is for Linux x86_64.

  2. Crates
cargo install tectonicdb

This command will download, build, and install the tdb, tdb-server, and dtftools binaries from crates.io.

  3. GitHub

To contribute, you will need a copy of the source code on your local machine.

git clone https://github.com/0b01/tectonicdb
cd tectonicdb
cargo build --release
cargo run --release --bin tdb-server

The binaries can be found under target/release.

How to use

It's very easy to set up.

./tdb-server --help

For example:

./tdb-server -vv -a -i 10000
# run the server on INFO verbosity
# turn on autoflush for every 10000 inserts per orderbook

Configuration

The server is configured through the following environment variables:

Variable Name       Default  Description
TDB_HOST            0.0.0.0  The host to which the database will bind
TDB_PORT            9001     The port that the database will listen on
TDB_DTF_FOLDER      db       Name of the directory in which DTF files will be stored
TDB_AUTOFLUSH       false    If true, recorded orderbook data is automatically flushed to DTF files every interval inserts
TDB_FLUSH_INTERVAL  1000     Every interval inserts, if autoflush is enabled, DTF files are written from memory to disk
TDB_GRANULARITY     0        Record history granularity level
TDB_LOG_FILE_NAME   tdb.log  Filename of the log file for the database
TDB_Q_CAPACITY      300      Capacity of the circular queue for recording history

Client API

Command                   Description
HELP                      Prints help
PING                      Responds PONG
INFO                      Returns info about table schemas
PERF                      Returns the count of items over time
LOAD [orderbook]          Load orderbook from disk to memory
USE [orderbook]           Switch the current orderbook
CREATE [orderbook]        Create orderbook
GET [n] FROM [orderbook]  Returns n items from orderbook
GET [n]                   Returns n items from current orderbook
COUNT                     Count of items in current orderbook
COUNT ALL                 Returns total count from all orderbooks
CLEAR                     Deletes everything in current orderbook
CLEAR ALL                 Drops everything in memory
FLUSH                     Flush current orderbook to disk
FLUSHALL                  Flush everything from memory to disk
SUBSCRIBE [orderbook]     Subscribe to updates from orderbook
EXISTS [orderbook]        Checks if orderbook exists

Data commands

USE [dbname]
ADD [ts], [seq], [is_trade], [is_bid], [price], [size];
INSERT 1505177459.685, 139010, t, f, 0.0703620, 7.65064240; INTO dbname
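A client can build these text-protocol lines programmatically. The sketch below uses a hypothetical helper (format_add is not part of tectonicdb) that mirrors the ADD syntax shown above; a real client would write the resulting line to a TcpStream connected to TDB_HOST:TDB_PORT.

```rust
// Hypothetical helper formatting an ADD command in the text protocol
// shown above. "t"/"f" encode the boolean flags, matching the example.
fn format_add(ts: f64, seq: u64, is_trade: bool, is_bid: bool, price: f64, size: f64) -> String {
    let flag = |b: bool| if b { "t" } else { "f" };
    format!(
        "ADD {}, {}, {}, {}, {}, {};",
        ts, seq, flag(is_trade), flag(is_bid), price, size
    )
}

fn main() {
    let line = format_add(1505177459.685, 139010, true, false, 0.0703620, 7.65064240);
    assert_eq!(line, "ADD 1505177459.685, 139010, t, f, 0.070362, 7.6506424;");
    println!("{}", line);
}
```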

Monitoring

TectonicDB supports monitoring/alerting by periodically sending its usage info to an InfluxDB instance:

    --influx-db <influx_db>                        influxdb db
    --influx-host <influx_host>                    influxdb host
    --influx-log-interval <influx_log_interval>    influxdb log interval in seconds (default is 60)

As a concrete example,

...
$ influx
> CREATE DATABASE market_data;
> ^D
$ tdb --influx-db market_data --influx-host http://localhost:8086 --influx-log-interval 20
...

TectonicDB will send field values disk={COUNT_DISK},size={COUNT_MEM} with tag ob={ORDERBOOK} to a measurement named after the configured db (market_data in this example).
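Given the example above, each report would land in InfluxDB roughly as the following line-protocol point (the orderbook name bnc_btc_usdt and the counts are purely illustrative; exact formatting may differ):

```
market_data,ob=bnc_btc_usdt disk=152000,size=3000
```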

Additionally, you can query usage information directly with INFO and PERF commands:

  1. INFO reports the current tick count in memory and on disk.

  2. PERF returns recorded tick count history whose granularity can be configured.

Logging

Log file defaults to tdb.log.

Testing

export RUST_TEST_THREADS=1
cargo test

Tests must be run sequentially because some tests depend on DTF files generated by other tests.

Benchmark

The tdb client comes with a benchmark mode. This command inserts 1M records into the database:

tdb -b 1000000

Using dtf files

Tectonic comes with a command-line tool, dtfcat, to inspect DTF file metadata and dump all stored events as either JSON or CSV.

Options:

USAGE:
    dtfcat [FLAGS] --input <INPUT>

FLAGS:
    -c, --csv         output csv
    -h, --help        Prints help information
    -m, --metadata    read only the metadata
    -V, --version     Prints version information

OPTIONS:
    -i, --input <INPUT>    file to read

As a library

It is possible to use the Dense Tick Format streaming protocol / file format in a different application. Works nicely with any buffer implementing the Write trait.
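The Write-based design means an encoder can target a file, a socket, or an in-memory buffer alike. The sketch below illustrates only that buffer-agnostic pattern; the naive fixed-width layout here is NOT the actual delta-compressed DTF encoding.

```rust
use std::io::{self, Write};

// Write one event into any sink implementing `Write`. Layout here is
// a naive little-endian serialization for illustration only; the real
// DTF format is delta-compressed and more compact.
fn write_tick<W: Write>(w: &mut W, ts: f64, seq: u32, price: f32, size: f32) -> io::Result<()> {
    w.write_all(&ts.to_le_bytes())?;
    w.write_all(&seq.to_le_bytes())?;
    w.write_all(&price.to_le_bytes())?;
    w.write_all(&size.to_le_bytes())
}

fn main() -> io::Result<()> {
    // Any Write sink works: Vec<u8>, File, TcpStream, ...
    let mut buf: Vec<u8> = Vec::new();
    write_tick(&mut buf, 1505177459.685, 139010, 0.070362, 7.6506424)?;
    assert_eq!(buf.len(), 8 + 4 + 4 + 4); // f64 + u32 + f32 + f32
    Ok(())
}
```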

Requirements

TectonicDB is a standalone service.

  • Linux

  • macOS

Language bindings:

  • TypeScript

  • Rust

  • Python

  • JavaScript

Additional Features

  • Usage statistics like Cloud SQL

  • Commandline inspection tool for dtf file format

  • Logging

  • Query by timestamp

Changelog

  • 0.5.0: InfluxDB monitoring plugin and improved command line arguments
  • 0.4.0: iterator-based APIs for handling DTF files and various quality of life improvements
  • 0.3.0: Refactor to async
