• Stars: 152
• Rank: 244,685 (top 5%)
• Language: Erlang
• License: MIT License
• Created: over 12 years ago
• Updated: over 5 years ago

Repository Details

locker - atomic distributed "check and set" for short-lived keys

locker is a distributed, decentralized, consistent in-memory key-value store written in Erlang. An entry expires after a certain amount of time unless its lease is extended. This makes it a practical option for locks, mutexes and leader election in a distributed system.

In terms of the CAP theorem, locker chooses consistency by requiring a quorum for every write. For reads, locker chooses availability and always does a local read, which can be inconsistent. Lease extensions are used as an anti-entropy mechanism to eventually propagate all leases.

It is designed to be used inside your application on the Erlang VM, using the Erlang distribution to communicate with masters and replicas.

Operations:

  • locker:lock/2,3,4
  • locker:update/3,4
  • locker:extend_lease/3
  • locker:release/2,3
  • locker:wait_for/2
  • locker:wait_for_release/2
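
A minimal usage sketch tying these operations together (the return shapes and the lease length argument below are assumptions, not taken from the locker documentation):

    %% Acquire the lock, using our pid as the value (return shape assumed).
    {ok, _, _, _} = locker:lock(my_key, self()),

    %% Periodically extend the lease so the lock does not expire
    %% (lease length in milliseconds is an assumption).
    ok = locker:extend_lease(my_key, self(), 10000),

    %% Release the lock when done.
    ok = locker:release(my_key, self()).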

Writes

To achieve "atomic" updates, the write is done in two phases: voting and committing.

In the voting phase, the client asks every master node for a promise that the node can later set the key. The promise is only granted if the current value is what the client expects. The promise will block any other clients from also receiving a promise for that key.

If a majority of the master nodes grants the client the promise (quorum), the client can go ahead and commit the lock. If a majority is not reached, the client aborts and deletes any promises it received.
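
An illustrative sketch of the two phases; the helpers (ask_for_promise/3, commit/3, abort/2) are invented for clarity and are not part of locker's API:

    %% Two-phase write: vote, then commit (illustrative pseudocode).
    write(Key, Expected, New, Masters) ->
        %% Phase 1: ask every master for a promise on Key.
        Votes = [{M, ask_for_promise(M, Key, Expected)} || M <- Masters],
        Promised = [M || {M, ok} <- Votes],
        Quorum = length(Masters) div 2 + 1,
        case length(Promised) >= Quorum of
            true ->
                %% Phase 2: quorum reached, commit the lock.
                [commit(M, Key, New) || M <- Promised],
                ok;
            false ->
                %% No quorum: abort and delete the promises we did get.
                [abort(M, Key) || M <- Promised],
                {error, no_quorum}
        end.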

Reads

locker currently only offers dirty reads from the local node. If consistent reads are needed, a read quorum could be used.
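
For example, a local read might look like this (locker:dirty_read/1 is assumed here, since the read API is not part of the operations list above):

    %% Read the key from the local node only; no quorum, possibly stale.
    case locker:dirty_read(my_key) of
        {ok, Value}        -> Value;
        {error, not_found} -> undefined
    end.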

Failure

"So, this is all fine and good, but what happens when something fails?". To make the implementation simple, there is a timeout on every promise and every lock. If a promise is not converted into a lock in time, it is simply deleted.

If the user process fails to extend the lease of its lock, the lock expires without any other node being consulted. If a node is partitioned away from the rest of the cluster, the lock might expire too soon, resulting in local reads returning an empty value. However, a new lock cannot be created on the partitioned side, as a quorum cannot be reached.

Calling locker:wait_for_release/2 blocks until a lock expires, either through a manual release or an expired lease.
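
For instance, a process waiting to take over a lock once the current holder is gone might do something like this (the timeout argument and the return shapes are assumptions):

    %% Block until my_key is released or its lease expires, then try
    %% to take the lock ourselves.
    ok = locker:wait_for_release(my_key, 30000),
    {ok, _, _, _} = locker:lock(my_key, self()).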

Lease expiration

Synchronized clocks are not required for correct expiration of a lease; it is only required that the clocks progress at roughly the same speed. When a lock is created or extended, the node sets the expiration to now() + lease_length, which means the user needs to account for clock skew when extending the lease. With leases on the order of minutes, the skew should be very small.
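
A sketch of this bookkeeping, using only the local clock (illustrative, not locker's actual internals):

    %% Each node computes expiry from its own clock; no synchronization
    %% is needed, only roughly equal clock speeds.
    set_expiry(LeaseLength) ->
        erlang:monotonic_time(millisecond) + LeaseLength.

    expired(Expiry) ->
        erlang:monotonic_time(millisecond) > Expiry.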

When a lease is extended, the extension is replicated to the other nodes in the cluster, which store the key if they don't already have it. This is used to bring new nodes in sync.

Replication

A locker cluster consists of masters and replicas. The masters participate in the quorum and accept writes from the clients; they implement strong consistency. Periodically, the masters send their transaction log to the replicas, where it is replayed to recreate the same state. Replication is thus asynchronous, and reads on the replicas might be inconsistent. Replication is done in batches to improve performance by reducing the number of messages each replica needs to handle. Calling locker:wait_for/2 after a successful write blocks until the key has been replicated to the local node. If the local node is a master, it returns immediately.
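
For example, a client that writes on the masters and then reads locally on a replica could wait for replication first (the timeout argument and return value are assumptions):

    %% Block until my_key has been replicated to this node; returns
    %% immediately on a master.
    ok = locker:wait_for(my_key, 5000).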

Adding new nodes

New nodes may first be added as replicas to sync up before being promoted to masters. Every operation that happens after the replica joins is also propagated to it. The time to catch up is then determined by how long it takes for all leases to be extended.

New nodes might also be added directly as masters, in which case the new node might cast negative votes in the quorum. As long as a quorum can still be reached, the out-of-sync master will accept writes and catch up just as fast as a replica would.

Using locker:set_nodes/3, masters and replicas can be set across the entire cluster in a "send-and-pray" operation. If something goes wrong during this operation, the locker cluster might be left in an inconsistent state.
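
A sketch of the replica-first rollout described above (the argument order of set_nodes/3 is an assumption: cluster, masters, replicas):

    %% Add NewNode as a replica first so it can catch up.
    ok = locker:set_nodes(Masters ++ [NewNode], Masters, [NewNode]),

    %% Once it has caught up, promote it to master.
    ok = locker:set_nodes(Masters ++ [NewNode], Masters ++ [NewNode], []).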

More Repositories

1. eredis - Erlang Redis client (Erlang, 626 stars)
2. etest - A lightweight, convention over configuration test framework for Erlang (Erlang, 70 stars)
3. homebrew-unityversions - Unity versions casks (Ruby, 53 stars)
4. etest_http - etest Assertions around HTTP (client-side) (Erlang, 52 stars)
5. eflatbuffers - Elixir/Erlang flatbuffers implementation (Elixir, 52 stars)
6. Paket.Unity3D - An extension for the Paket dependency manager that enables the integration of NuGet dependencies into Unity3D projects (F#, 42 stars)
7. kafkaesque - A JRuby-based event stream processing framework for Kafka (Ruby, 27 stars)
8. bagger (Ruby, 18 stars)
9. atlas-unity - A Unity 3D gradle plugin (Groovy, 14 stars)
10. facebook-signed-request - Ruby Gem which parses and validates Facebook signed requests (Ruby, 11 stars)
11. erlang_prelude - A collection of Erlang modules solving often-encountered problems (Erlang, 11 stars)
12. unity-version-manager - Tool that just manipulates a link to the current unity version (Ruby, 8 stars)
13. circuit_breaker - Basic circuit breaker in Ruby to prevent long running external calls from blocking an application (Ruby, 6 stars)
14. ebloomd - Bloom Filter Server in Erlang (Erlang, 5 stars)
15. nils - Elixir migration orchestration (Elixir, 4 stars)
16. atlas-rust - A simple gradle plugin to build rust library crates (Groovy, 3 stars)
17. elli_access_log (Erlang, 3 stars)
18. atlas-appcenter - Gradle plugin for HockeyApp uploads (Groovy, 2 stars)
19. atlas-build-unity - A gradle companion plugin for the wooga internal unity build system (Groovy, 2 stars)
20. reconnaissance (Erlang, 2 stars)
21. atlas-slack - A slack plugin for Gradle (Groovy, 2 stars)
22. atlas-unity-version-manager - A gradle plugin to manage unity version installation (Groovy, 2 stars)
23. play-deliver - Upload screenshots, metadata of your app to the Play Store using a single command (Python, 2 stars)
24. atlas-jenkins-pipeline - Jenkins Pipeline shared library (Groovy, 1 star)
25. elli_gzip_request - Elli middleware to accept requests with a gzip content-encoding (Erlang, 1 star)
26. helpshift.gem - Rubygem to interface with the Helpshift API (Ruby, 1 star)
27. atlas-build-unity-ios - iOS build plugin for unity exported Xcode projects (Groovy, 1 star)
28. spock-github-extension - Spock github test repository extension (Groovy, 1 star)
29. jenkins-metrics - A simple CLI tool to calculate basic Jenkins Job KPI's (Rust, 1 star)
30. homebrew-unityversions-beta - Unity versions beta casks (Ruby, 1 star)
31. atlas-github - Gradle plugin to publish artifacts to github (Groovy, 1 star)
32. NUnit3-to-NUnit2-Format-Converter - Python script to convert NUnit3 report xml files to NUnit2 format (Python, 1 star)
33. atlas-upm-artifactory - Gradle plugin for packaging and publishing UPM projects into an artifactory repository (Groovy, 1 star)