
bcache

Eventually consistent distributed in-memory cache Go library.


A Go library for creating a distributed in-memory cache inside your app.

Features

  • LRU cache with configurable maximum keys
  • Eventually consistent synchronization between peers
  • Data is replicated to all nodes
  • Cache-filling mechanism: when no cache entry exists for a given key, bcache coordinates the fill so that only one call populates the cache, avoiding a thundering herd / cache stampede

Why use it

  • the extra network hops required by external caches such as Redis or memcached are not acceptable for you
  • you only need a cache with simple Set, Get, and Delete operations
  • you have enough RAM to hold the cache data

How it Works

  1. Nodes find each other using a gossip protocol.

You only need to specify one or a few nodes as bootstrap nodes, and all nodes will then discover each other through the gossip protocol.

  2. When a cache Set or Delete happens, the event is propagated to all of the nodes.

As a result, all of the nodes will eventually have synced data.
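
For illustration, here is a minimal sketch of what eventual consistency means in practice. The variable names bcA and bcB are hypothetical and stand for caches created as in the Quick Start below; the key, value, and one-second wait are purely illustrative, since actual propagation delay depends on the gossip rounds.

// On node A (the bootstrap node): store a value.
bcA.Set("greeting", "hello", 86400)

// On node B (joined via node A): the key shows up once the Set event has propagated.
// The sleep is illustrative only, not a guaranteed propagation bound.
time.Sleep(1 * time.Second)
if val, ok := bcB.Get("greeting"); ok {
	fmt.Println("replicated value:", val)
}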

Cache filling

The cache-filling mechanism is provided by the GetWithFiller func.

When no cache entry exists for the given key, GetWithFiller:

  • calls the provided Filler
  • sets the cache using the value returned by the Filler

Even when many goroutines call GetWithFiller for the same key, the given Filler func is only called once per key. This way, cache stampedes are avoided.
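
The coordination is conceptually the same as the single-flight pattern sketched below. This is a general illustration of the idea, not bcache's internal code: concurrent callers for the same key wait on one in-flight fill instead of each hitting the backing store.

// singleFlight is a minimal sketch of coordinating cache fills so that only one
// caller runs the expensive filler per key; the others wait and reuse its result.
type singleFlight struct {
	mu    sync.Mutex
	calls map[string]*call // in-flight fills, keyed by cache key
}

type call struct {
	wg  sync.WaitGroup
	val string
	err error
}

func newSingleFlight() *singleFlight {
	return &singleFlight{calls: make(map[string]*call)}
}

func (s *singleFlight) do(key string, filler func(key string) (string, error)) (string, error) {
	s.mu.Lock()
	if c, ok := s.calls[key]; ok {
		// Another goroutine is already filling this key: wait for its result.
		s.mu.Unlock()
		c.wg.Wait()
		return c.val, c.err
	}
	c := &call{}
	c.wg.Add(1)
	s.calls[key] = c
	s.mu.Unlock()

	// Only this goroutine executes the filler for the key.
	c.val, c.err = filler(key)
	c.wg.Done()

	s.mu.Lock()
	delete(s.calls, key)
	s.mu.Unlock()
	return c.val, c.err
}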

Quick Start

In server 1

bc, err := New(Config{
	// PeerID:     1, // leave it unset; it will be set automatically based on the MAC address
	ListenAddr: "192.168.0.1:12345",
	Peers:      nil, // nil because this node is used as the bootstrap node
	MaxKeys:    1000,
	Logger:     logrus.New(),
})
if err != nil {
    log.Fatalf("failed to create cache: %v", err)
}
bc.Set("my_key", "my_val", 86400)

In server 2

bc, err := New(Config{
	// PeerID:     2, // leave it unset; it will be set automatically based on the MAC address
	ListenAddr: "192.168.0.2:12345",
	Peers:      []string{"192.168.0.1:12345"},
	MaxKeys:    1000,
	Logger:     logrus.New(),
})
if err != nil {
    log.Fatalf("failed to create cache: %v", err)
}
bc.Set("my_key2", "my_val2", 86400)

In server 3

bc, err := New(Config{
	// PeerID:     3, // leave it unset; it will be set automatically based on the MAC address
	ListenAddr: "192.168.0.3:12345",
	Peers:      []string{"192.168.0.1:12345"},
	MaxKeys:    1000,
	Logger:     logrus.New(),
})
if err != nil {
    log.Fatalf("failed to create cache: %v", err)
}
val, exists := bc.Get("my_key2")
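
Because replication is eventual, my_key2 (set on server 2) may not be visible on server 3 immediately after it was set, so check the exists flag before using the value. A minimal usage sketch:

if exists {
	fmt.Println("my_key2 =", val)
} else {
	// The Set event from server 2 may not have propagated to this node yet.
}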

GetWithFiller example

bc, err := New(Config{
	PeerID:     3,
	ListenAddr: "192.168.0.3:12345",
	Peers:      []string{"192.168.0.1:12345"},
	MaxKeys:    1000,
})
if err != nil {
    log.Fatalf("failed to create cache: %v", err)
}
val, exp, err := bc.GetWithFiller("my_key2", func(key string) (string, error) {
	// get the value from the database
	// .....
	//
	return value, nil
}, 86400)
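
As a usage note for the example above: check err before relying on the result; val is the cached or freshly filled value and exp the accompanying expiration value returned by GetWithFiller.

if err != nil {
    log.Fatalf("failed to get my_key2: %v", err)
}
fmt.Println("value:", val, "expires:", exp)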

Credits