mongo-swarm

Bootstrapping MongoDB sharded clusters on Docker Swarm

Mongo-swarm is a POC project that automates the bootstrapping of a MongoDB cluster for production use. With a single command you can deploy the Mongos, Config and Data replica sets onto Docker Swarm, forming a highly available MongoDB cluster that can survive multiple node failures without service interruption. The Docker stack is composed of two MongoDB replica sets, two Mongos instances and the mongo-bootstrap service. Mongo-bootstrap is written in Go and handles the replication, sharding and routing configuration.

Overview

Prerequisites

In order to deploy the MongoDB stack you need a Docker Swarm cluster consisting of eleven nodes:

  • 3 Swarm manager nodes (prod-manager-1, prod-manager-2, prod-manager-3)
  • 3 Mongo data nodes (prod-mongodata-1, prod-mongodata-2, prod-mongodata-3)
  • 3 Mongo config nodes (prod-mongocfg-1, prod-mongocfg-2, prod-mongocfg-3)
  • 2 Mongo router nodes (prod-mongos-1, prod-mongos-2)

You can name your Swarm nodes however you want; the bootstrap process uses placement constraints based on the mongo.role node label. For the bootstrapping to take place, you need to apply the following labels:

Mongo data nodes

docker node update --label-add mongo.role=data1 prod-mongodata-1
docker node update --label-add mongo.role=data2 prod-mongodata-2
docker node update --label-add mongo.role=data3 prod-mongodata-3

Mongo config nodes

docker node update --label-add mongo.role=cfg1 prod-mongocfg-1
docker node update --label-add mongo.role=cfg2 prod-mongocfg-2
docker node update --label-add mongo.role=cfg3 prod-mongocfg-3

Mongos nodes

docker node update --label-add mongo.role=mongos1 prod-mongos-1
docker node update --label-add mongo.role=mongos2 prod-mongos-2

Deploy

Clone this repository and run the bootstrap script on a Docker Swarm manager node:

$ git clone https://github.com/stefanprodan/mongo-swarm
$ cd mongo-swarm

$ ./bootstrap.sh

The bootstrap.sh script creates two overlay networks and deploys the mongo stack:

docker network create --attachable -d overlay mongo
docker network create --attachable -d overlay mongos

docker stack deploy -c swarm-compose.yml mongo

Networking

The config and data replica sets are isolated from the rest of the swarm in the mongo overlay network. The routers, Mongos1 and Mongos2, are connected to both the mongo and the mongos networks. You should attach your application containers to the mongos network in order to communicate with the MongoDB cluster.

Persistent storage

At the first run, each data and config node is provisioned with a named Docker volume. This ensures the MongoDB databases will not be purged if you restart or update the MongoDB cluster. Even if you remove the whole stack, the volumes will remain on disk. If you want to delete the MongoDB data and config, you have to remove the named volumes with docker volume rm on each Swarm node.

Bootstrapping

After the stack has been deployed, the mongo-bootstrap container will do the following (a sketch of the replica set initiation follows the list):

  • waits for the data nodes to be online
  • joins the data nodes into a replica set (datars)
  • waits for the config nodes to be online
  • joins the config nodes into a replica set (cfgrs)
  • waits for the mongos nodes to be online
  • adds the data replica set shard to the mongos instances
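For illustration, joining the members into a replica set comes down to running the replSetInitiate admin command against one of the nodes. Below is a minimal sketch of that step using the mgo driver; it is an assumption about the approach, not the actual mongo-bootstrap source:

import (
    "time"

    "gopkg.in/mgo.v2"
    "gopkg.in/mgo.v2/bson"
)

// initReplicaSet dials the first member directly (the replica set does not
// exist yet, so server discovery must be bypassed) and runs replSetInitiate
// with the full member list. Error details and retries are omitted.
func initReplicaSet(name string, members []string) error {
    session, err := mgo.DialWithInfo(&mgo.DialInfo{
        Addrs:   []string{members[0]},
        Direct:  true,
        Timeout: 5 * time.Second,
    })
    if err != nil {
        return err
    }
    defer session.Close()

    cfg := make([]bson.M, len(members))
    for i, host := range members {
        cfg[i] = bson.M{"_id": i, "host": host}
    }

    var result bson.M
    return session.Run(bson.M{"replSetInitiate": bson.M{"_id": name, "members": cfg}}, &result)
}

Bootstrapping the data replica set would then amount to something like initReplicaSet("datars", []string{"data1:27017", "data2:27017", "data3:27017"}).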

You can monitor the bootstrap process by watching the mongo-bootstrap service logs:

$ docker service logs -f mongo_bootstrap

msg="Bootstrap started for data cluster datars members [data1:27017 data2:27017 data3:27017]"
msg="datars member data1:27017 is online"
msg="datars member data2:27017 is online"
msg="datars member data3:27017 is online"
msg="datars replica set initialized successfully"
msg="datars member data1:27017 state PRIMARY"
msg="datars member data2:27017 state SECONDARY"
msg="datars member data3:27017 state SECONDARY"
msg="Bootstrap started for config cluster cfgrs members [cfg1:27017 cfg2:27017 cfg3:27017]"
msg="cfgrs member cfg1:27017 is online"
msg="cfgrs member cfg2:27017 is online"
msg="cfgrs member cfg3:27017 is online"
msg="cfgrs replica set initialized successfully"
msg="cfgrs member cfg1:27017 state PRIMARY"
msg="cfgrs member cfg2:27017 state SECONDARY"
msg="cfgrs member cfg3:27017 state SECONDARY"
msg="Bootstrap started for mongos [mongos1:27017 mongos2:27017]"
msg="mongos1:27017 is online"
msg="mongos1:27017 shard added"
msg="mongos2:27017 is online"
msg="mongos2:27017 shard added"

High availability

A MongoDB cluster provisioned with mongo-swarm can survive node failures and will start an automatic failover if:

  • the primary data node goes down
  • the primary config node goes down
  • one of the mongos nodes goes down

When the primary data or config node goes down, the Mongos instances will detect the new primary node and will reroute all the traffic to it. If a Mongos node goes down and your applications are configured to use both Mongos nodes, the Mongo driver will switch to the online Mongos instance. When you recover a failed data or config node, this node will rejoin the replica set and resync if the oplog size allows it.
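On the client side, a common mgo pattern for riding out such a failover is to refresh the session and retry the operation once. A minimal sketch, with an illustrative collection name (this helper is not part of mongo-swarm):

// insertWithRetry retries a single insert after refreshing the session,
// which drops any broken connections and lets the driver rediscover the
// cluster topology after a failover.
func insertWithRetry(session *mgo.Session, doc interface{}) error {
    c := session.DB("test").C("demo")
    if err := c.Insert(doc); err != nil {
        session.Refresh()
        return c.Insert(doc)
    }
    return nil
}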

If you want the cluster to withstand more than one node failure per replica set, you can scale out the data and config sets by modifying the swarm-compose.yml file. Always keep an odd number of members per replica set to avoid split-brain situations: a replica set with N voting members can lose at most (N-1)/2 of them and still elect a primary, so a five-member set tolerates two failures.

You can test the automatic failover by killing or removing the primary data and config nodes:

root@prod-data1-1:~# docker kill mongo_data1.1....
root@prod-cfg1-1:~# docker rm -f mongo_cfg1.1....

When you bring down the two instances, Docker Swarm will start new containers to replace the killed ones. The data and config replica sets will elect a new primary and the newly started instances will rejoin the cluster as secondaries.

You can check the cluster state by doing an HTTP GET on mongo-bootstrap port 9090.

docker run --rm --network mongo tutum/curl:alpine curl bootstrap:9090

Client connectivity

To test the Mongos connectivity you can run an interactive mongo container attached to the mongos network:

$ docker run --network mongos -it mongo:3.4 mongo mongos1:27017 

mongos> use test
switched to db test

mongos> db.demo.insert({text: "demo"})
WriteResult({ "nInserted" : 1 })

mongos> db.demo.find()
{ "_id" : ObjectId("59a6fa01e33a5cec9872664f"), "text" : "demo" }

Mongo clients should connect to both Mongos nodes running on the mongos overlay network. Here is an example using the mgo Go MongoDB driver:

session, err := mgo.Dial("mongodb://mongos1:27017,mongos2:27017/")
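A slightly fuller connection setup might look like the sketch below; the timeout values and the Ping check are illustrative choices, not requirements of mongo-swarm:

package main

import (
    "log"
    "time"

    "gopkg.in/mgo.v2"
)

func main() {
    // Dial both Mongos routers so the driver can fail over between them.
    session, err := mgo.DialWithTimeout("mongodb://mongos1:27017,mongos2:27017/", 10*time.Second)
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    // Fail operations instead of blocking indefinitely if the cluster becomes unreachable.
    session.SetSyncTimeout(10 * time.Second)

    // Verify that at least one Mongos is reachable.
    if err := session.Ping(); err != nil {
        log.Fatal(err)
    }
    log.Println("connected to the MongoDB cluster")
}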

Load testing

You can run load tests for the MongoDB cluster using the loadtest app.

Start 3 loadtest instances on the mongos network:

docker stack deploy -c swarm-loadtest.yml lt

The loadtest app is a Go web service that connects to the two Mongos nodes and performs an insert followed by a query:

http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
    session := s.Repository.Session.Copy()
    defer session.Close()

    log := &AccessLog{
        Timestamp: time.Now().UTC(),
        UserAgent: req.Header.Get("User-Agent"),
    }

    c := session.DB("test").C("log")

    err := c.Insert(log)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    logs := []AccessLog{}

    err = c.Find(nil).Sort("-timestamp").Limit(10).All(&logs)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    b, err := json.MarshalIndent(logs, "", "  ")
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    w.Header().Set("Content-Type", "application/json")
    w.Write(b)
})
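The AccessLog document used above is presumably defined along these lines; the field names are inferred from the handler and the exact struct in the loadtest app may differ:

// mgo lowercases struct field names when marshalling to BSON by default,
// so these fields are stored as "timestamp" and "useragent", which is why
// the handler sorts on "-timestamp".
type AccessLog struct {
    Timestamp time.Time
    UserAgent string
}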

The loadtest service is exposed to the internet on port 9999. You can run the load test with rakyll/hey or Apache Bench.

#install hey
go get -u github.com/rakyll/hey

#do 10K requests 
hey -n 10000 -c 100 -m GET http://<SWARM-PUBLIC-IP>:9999/

While running the load test you can kill a Mongos, data or config node and observe the impact of the failover.

Running the load test with a single loadtest instance:

Summary:
  Total:	58.3945 secs
  Slowest:	2.5077 secs
  Fastest:	0.0588 secs
  Average:	0.5608 secs
  Requests/sec:	171.2490
  Total data:	8508290 bytes
  Size/request:	850 bytes

Response time histogram:
  0.304 [1835]	|∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.549 [3781]	|∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.793 [2568]	|∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  1.038 [1153]	|∎∎∎∎∎∎∎∎∎∎∎∎
  1.283 [400]	|∎∎∎∎

Running the load test with 3 loadtest instances:

Summary:
  Total:	35.5129 secs
  Slowest:	1.9471 secs
  Fastest:	0.0494 secs
  Average:	0.3223 secs
  Requests/sec:	281.5877
  Total data:	8508392 bytes
  Size/request:	850 bytes

Response time histogram:
  0.239 [5040]	|∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.429 [2358]	|∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.619 [1235]	|∎∎∎∎∎∎∎∎∎∎
  0.808 [741]	|∎∎∎∎∎∎
  0.998 [396]	|∎∎∎

Scaling the application from one instance to three made the load test 23 seconds faster and increased the throughput from 171 to 281 requests per second.

Monitoring with Weave Scope

Monitoring the load test with Weave Cloud shows how the traffic is routed by the Docker Swarm load balancer and by the Mongos instances (the Traffic view from Weave Scope).

Weave Scope is a great tool for visualising network traffic between containers and Docker Swarm nodes. Besides traffic, you can also monitor system load, CPU and memory usage. By recording multiple load test sessions with Scope you can determine the maximum load your infrastructure can take without performance degradation.

Monitoring a Docker Swarm cluster with Weave Cloud is as simple as deploying a Scope container on each Swarm node. More info on installing Weave Scope with Docker can be found here.

Local deployment

If you want to run the MongoDB cluster on a single Docker machine without Docker Swarm mode you can use the local compose file. I use it for debugging on Docker for Mac.

$ docker-compose -f local-compose.yml up -d

This will start all the MongoDB services and mongo-bootstrap on the bridge network without persistent storage.
