# lnx Datacake

Easy-to-use tooling for building eventually consistent distributed data systems in Rust.
"Oh consistency where art thou?" - CF.
## ✨ Features

- Simple setup: a cluster can be set up and ready to use with one trait.
- Adjustable consistency levels when mutating state.
- Data center aware replication prioritisation.
- Pre-built test suite for `Storage` trait implementations to ensure correct functionality.
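If you implement your own `Storage` backend, the bundled test suite can be run against it to verify correct behaviour. Below is a minimal sketch; the `test_suite::run_test_suite` entry point is an assumption based on the crate's test utilities (as is reusing the bundled `MemStore` as a stand-in for your own implementation), so check the `datacake-eventual-consistency` docs for the exact path in your version:

```rust
use datacake_eventual_consistency::test_utils::MemStore;

#[tokio::test]
async fn my_storage_passes_test_suite() {
    // Swap `MemStore` for your own `Storage` implementation.
    // NOTE: `test_suite::run_test_suite` is an assumed entry point;
    // consult the crate docs for the exact path in your version.
    let store = MemStore::default();
    datacake_eventual_consistency::test_suite::run_test_suite(store).await;
}
```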
## The packages
Datacake provides several utility libraries as well as some pre-made data store handlers:
- `datacake-crdt` - A CRDT implementation based on a hybrid logical clock (HLC), provided in the form of the `HLCTimestamp`.
- `datacake-node` - A cluster membership system and managed RPC built on top of chitchat.
- `datacake-eventual-consistency` - Built on top of `datacake-crdt`, a batteries-included framework for building eventually consistent, replicated systems where you only need to implement a basic storage trait.
- `datacake-sqlite` - A pre-built and tested implementation of the datacake `Storage` trait built upon SQLite.
- `datacake-lmdb` - A pre-built and tested implementation of the datacake `Storage` trait built upon LMDB.
- `datacake-rpc` - A fast, zero-copy RPC framework with a familiar actor-like feel to it.
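As a quick taste of the CRDT building blocks, the sketch below shows two `HLCTimestamp` clocks observing each other's events, which is how the hybrid logical clock preserves causal ordering across nodes despite wall-clock skew. It assumes the `HLCTimestamp::new`/`send`/`recv` API and the `get_unix_timestamp_ms` helper from earlier `datacake-crdt` releases; signatures may differ in the version you use:

```rust
use datacake_crdt::{get_unix_timestamp_ms, HLCTimestamp};

fn main() {
    // Two independent clocks identified by different node IDs.
    // (API assumed from earlier `datacake-crdt` releases.)
    let mut node_a = HLCTimestamp::new(get_unix_timestamp_ms(), 0, 0);
    // Simulate node B's wall clock running ~5 seconds ahead.
    let mut node_b = HLCTimestamp::new(get_unix_timestamp_ms() + 5_000, 0, 1);

    // Node B stamps an event and sends the timestamp to node A.
    let remote_ts = node_b.send().expect("Generate timestamp.");

    // On receipt, node A's clock advances past node B's, so events
    // node A stamps afterwards are ordered after the remote event.
    node_a.recv(&remote_ts).expect("Receive timestamp.");
    assert!(node_a.send().expect("Generate timestamp.") > remote_ts);
}
```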
## Examples
Check out some pre-built apps we have in the example folder. You can also look at some heavier integration tests here.
### Single Node Cluster
Here's an example of a basic cluster with one node that runs on your local network. It uses almost all of the packages, including:

- `datacake-node` for the core node membership.
- `datacake-crdt` for the `HLCTimestamp` and CRDT implementations.
- `datacake-eventual-consistency` for the eventually consistent replication of state.
- `datacake-rpc`, bundled up with everything, for managing all the cluster RPC.
```rust
use std::net::SocketAddr;

use datacake::eventual_consistency::test_utils::MemStore;
use datacake::eventual_consistency::EventuallyConsistentStoreExtension;
use datacake::node::{ConnectionConfig, Consistency, DCAwareSelector, DatacakeNodeBuilder};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let addr = "127.0.0.1:8080".parse::<SocketAddr>().unwrap();
    let connection_cfg = ConnectionConfig::new(addr, addr, Vec::<String>::new());
    let node = DatacakeNodeBuilder::<DCAwareSelector>::new(1, connection_cfg)
        .connect()
        .await
        .expect("Connect node.");

    let store = node
        .add_extension(EventuallyConsistentStoreExtension::new(MemStore::default()))
        .await
        .expect("Create store.");

    let handle = store.handle();

    handle
        .put(
            "my-keyspace",
            1,
            b"Hello, world! From keyspace 1.".to_vec(),
            Consistency::All,
        )
        .await
        .expect("Put doc.");

    Ok(())
}
```
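Once a document has been stored, the same handle can read it back or delete it. The continuation below is a sketch assuming the handle also exposes `get` and `del` alongside `put`, and that the returned document has `id()` and `data()` accessors; consult the crate docs for the exact names:

```rust
// Continuing inside `main` from the example above.
// (`get`/`del` and the `Document` accessors are assumed here.)
let doc = handle
    .get("my-keyspace", 1)
    .await
    .expect("Get doc.")
    .expect("Doc should exist.");
assert_eq!(doc.id(), 1);
assert_eq!(doc.data(), b"Hello, world! From keyspace 1.");

handle
    .del("my-keyspace", 1, Consistency::All)
    .await
    .expect("Del doc.");
```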
## Why does Datacake exist?
Datacake is the result of my attempts at bringing high availability to lnx. Unlike languages such as Erlang or Go, Rust currently has a fairly young ecosystem around distributed systems. This makes it very hard to build a replicated system in Rust without implementing a lot of things from scratch, and without a lot of research into the area to begin with.
Currently, the main algorithm available in Rust is Raft, which provides replication via consensus. Overall it is a very good algorithm, and a very simple one to understand. However, I'm not currently satisfied that the existing implementations are stable enough or actively maintained enough to choose one. (Also, for lnx's particular use case, leaderless eventual consistency was preferable.)
Because of the above, I built Datacake with the aim of building a reliable, well-tested, eventually consistent system akin to how Cassandra, or more specifically ScyllaDB, behaves with eventually consistent replication, but with a few core differences:
- Datacake does not require an external source or read repair to clear tombstones.
- The underlying CRDTs, which are what actually power Datacake, are kept purely in memory (see the sketch after this list).
- Partitioning and sharding are not (currently) supported.
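To make the second point concrete, here is a rough sketch of the kind of in-memory CRDT involved. The `OrSWotSet` type and its `insert`/`merge` methods are assumed from earlier `datacake-crdt` releases, together with the `HLCTimestamp` shown earlier; exact signatures may differ in current versions:

```rust
use datacake_crdt::{get_unix_timestamp_ms, HLCTimestamp, OrSWotSet};

fn main() {
    // One hybrid logical clock per node.
    // (API assumed from earlier `datacake-crdt` releases.)
    let mut clock_a = HLCTimestamp::new(get_unix_timestamp_ms(), 0, 0);
    let mut clock_b = HLCTimestamp::new(get_unix_timestamp_ms(), 0, 1);

    // Each node mutates its own purely in-memory replica of the set.
    let mut set_a = OrSWotSet::default();
    let mut set_b = OrSWotSet::default();
    set_a.insert(1, clock_a.send().unwrap());
    set_a.insert(2, clock_a.send().unwrap());
    set_b.insert(3, clock_b.send().unwrap());

    // Merging replicas converges them without coordination; tombstone
    // state is tracked by the set itself rather than by an external
    // repair process.
    set_b.merge(set_a);
}
```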
It's worth noting that Datacake itself does not implement the consensus and membership algorithms from scratch; instead, we use chitchat, developed by Quickwit, which is an implementation of the Scuttlebutt algorithm.
## Inspirations and references
- "CRDTs for Mortals" by James Long
- "Big(ger) Sets: Making CRDT Sets Scale in Riak" by Russell Brown
- "CRDTs Illustrated" by Arnout Engelen
- "Practical data synchronization with CRDTs" by Dmitry Ivanov
- "CRDTs and the Quest for Distributed Consistency"
- "Logical Physical Clocks and Consistent Snapshots in Globally Distributed Databases"
## Contributing
Contributions are always welcome, although please open an issue first if you wish to heavily extend or modify the main cluster system, as some things are not always as simple as they seem.
### What sort of things could I contribute?
### Future Ideas
- Multi-raft framework?
- CASPaxos???
- More storage implementations?