kubernetes-nats-cluster

NATS cluster on top of Kubernetes made easy.

THIS PROJECT HAS BEEN ARCHIVED. SEE https://github.com/nats-io/nats-operator

NOTE: This repository provides a configurable way to deploy secure, available and scalable NATS clusters. However, a smarter solution is on the way (see #5).

Pre-requisites

  • Kubernetes cluster v1.8+ - tested with v1.9.0 on top of Vagrant + CoreOS
  • At least 3 nodes available (see Pod anti-affinity)
  • kubectl configured to access your cluster master API Server
  • openssl for TLS certificate generation

Deploy

We will be deploying a cluster of 3 NATS instances, with the following set-up (see the sample nats.conf after this list):

  • TLS enabled for client connections, but not for clustering, because route peer authentication requires real DNS SANs in the certificate
  • NATS client credentials: nats_client_user:nats_client_pwd
  • NATS route/cluster credentials: nats_route_user:nats_route_pwd
  • Logging: debug:false, trace:true, logtime:true
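
For reference, here's a minimal sketch of a nats.conf matching the set-up above. The /etc/nats/tls paths are assumptions - adjust them to wherever nats.yml mounts the tls-nats-server secret:

port: 4222
http: 8222

# TLS for client connections, using the tls-nats-server artifacts.
tls {
  cert_file: "/etc/nats/tls/nats.pem"
  key_file: "/etc/nats/tls/nats-key.pem"
  ca_file: "/etc/nats/tls/ca.pem"
}

# Client credentials.
authorization {
  user: nats_client_user
  password: nats_client_pwd
}

# Route/cluster connections: credentials only, no TLS (see note above).
cluster {
  port: 6222
  authorization {
    user: nats_route_user
    password: nats_route_pwd
  }
}

debug: false
trace: true
logtime: true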

First, make sure to change nats.conf according to your needs. Then create a Kubernetes configmap to store it:

kubectl create configmap nats-config --from-file nats.conf

Next, we need to generate valid TLS artifacts:

openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
openssl genrsa -out nats-key.pem 2048
openssl req -new -key nats-key.pem -out nats.csr -subj "/CN=kube-nats" -config ssl.cnf
openssl x509 -req -in nats.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out nats.pem -days 3650 -extensions v3_req -extfile ssl.cnf
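
The commands above rely on an ssl.cnf file; a minimal sketch would be something like the following. The subjectAltName entries are assumptions - they should cover every name clients use to reach the service (here, the nats headless service in the default namespace, plus the per-pod names it yields):

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]

[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = nats
DNS.2 = nats.default.svc.cluster.local
DNS.3 = *.nats.default.svc.cluster.local
IP.1 = 127.0.0.1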

Then, it's time to create a couple of Kubernetes secrets to store the TLS artifacts:

  • tls-nats-server for the NATS server TLS setup
  • tls-nats-client for NATS client apps - they will need it to validate the self-signed certificate that secures the NATS server

kubectl create secret generic tls-nats-server --from-file nats.pem --from-file nats-key.pem --from-file ca.pem
kubectl create secret generic tls-nats-client --from-file ca.pem

ATTENTION: Using self-signed certificates, and reusing the same certificate for securing both client and cluster connections, is a significant security compromise. But for the sake of showing how it can be done, I'm fine with doing just that. In an ideal scenario, there should be:

  • One centralized PKI/CA
  • One certificate for securing NATS route/cluster connections
  • One certificate for securing NATS client connections
  • TLS route/cluster authentication should be enforced, so one TLS certificate per route/cluster peer
  • TLS client authentication should be enforced, so one TLS certificate per client

And finally, we deploy NATS:

kubectl create -f nats.yml

Logs should be enough to make sure everything is working as expected:

$ kubectl logs -f nats-0
[1] 2017/12/17 12:38:37.801139 [INF] Starting nats-server version 1.0.4
[1] 2017/12/17 12:38:37.801449 [INF] Starting http monitor on 0.0.0.0:8222
[1] 2017/12/17 12:38:37.801580 [INF] Listening for client connections on 0.0.0.0:4242
[1] 2017/12/17 12:38:37.801772 [INF] TLS required for client connections
[1] 2017/12/17 12:38:37.801778 [INF] Server is ready
[1] 2017/12/17 12:38:37.802078 [INF] Listening for route connections on 0.0.0.0:6222
[1] 2017/12/17 12:38:38.874497 [TRC] 10.244.1.3:33494 - rid:1 - ->> [CONNECT {"verbose":false,"pedantic":false,"user":"nats_route_user","pass":"nats_route_pwd","tls_required":true,"name":"KGMPnL89We3gFLEjmp8S5J"}]
[1] 2017/12/17 12:38:38.956806 [TRC] 10.244.74.2:46018 - rid:3 - ->> [CONNECT {"verbose":false,"pedantic":false,"user":"nats_route_user","pass":"nats_route_pwd","tls_required":true,"name":"Skc5mx9enWrGPIQhyE7uzR"}]
[1] 2017/12/17 12:38:39.951160 [TRC] 10.244.1.4:46242 - rid:4 - ->> [CONNECT {"verbose":false,"pedantic":false,"user":"nats_route_user","pass":"nats_route_pwd","tls_required":true,"name":"0kaCfF3BU8g92snOe34251"}]
[1] 2017/12/17 12:40:38.956203 [TRC] 10.244.74.2:46018 - rid:3 - <<- [PING]
[1] 2017/12/17 12:40:38.958279 [TRC] 10.244.74.2:46018 - rid:3 - ->> [PING]
[1] 2017/12/17 12:40:38.958300 [TRC] 10.244.74.2:46018 - rid:3 - <<- [PONG]
[1] 2017/12/17 12:40:38.961791 [TRC] 10.244.74.2:46018 - rid:3 - ->> [PONG]
[1] 2017/12/17 12:40:39.951421 [TRC] 10.244.1.4:46242 - rid:4 - <<- [PING]
[1] 2017/12/17 12:40:39.952578 [TRC] 10.244.1.4:46242 - rid:4 - ->> [PONG]
[1] 2017/12/17 12:40:39.952594 [TRC] 10.244.1.4:46242 - rid:4 - ->> [PING]
[1] 2017/12/17 12:40:39.952598 [TRC] 10.244.1.4:46242 - rid:4 - <<- [PONG]

Scale

WARNING: Due to the Pod anti-affinity rule, scaling up to n NATS instances requires n available Kubernetes nodes.

kubectl scale statefulsets nats --replicas 5

Did it work? Listing the services and pods (kubectl get svc,po) should tell:

NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
svc/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP                      1h
svc/nats         ClusterIP   None         <none>        4222/TCP,6222/TCP,8222/TCP   4m

NAME        READY     STATUS    RESTARTS   AGE
po/nats-0   1/1       Running   0          4m
po/nats-1   1/1       Running   0          4m
po/nats-2   1/1       Running   0          4m
po/nats-3   1/1       Running   0          7s
po/nats-4   1/1       Running   0          6s

Access the service

Don't forget that services in Kubernetes are only accessible from containers running inside the cluster.

In this case, we're using a headless service, so the service name resolves directly to the addresses of the NATS pods.

Just point your client apps to:

nats:4222
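
For example, a minimal Go client (a sketch using the github.com/nats-io/go-nats library; the /etc/nats/tls-client mount path for the tls-nats-client secret is an assumption):

package main

import (
	"log"

	nats "github.com/nats-io/go-nats"
)

func main() {
	// Connect to the headless service with the client credentials from
	// nats.conf; the CA from the tls-nats-client secret is used to
	// validate the self-signed server certificate.
	nc, err := nats.Connect("tls://nats:4222",
		nats.UserInfo("nats_client_user", "nats_client_pwd"),
		nats.RootCAs("/etc/nats/tls-client/ca.pem"),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Publish a test message and make sure it reached the server.
	if err := nc.Publish("greetings", []byte("hello from Kubernetes")); err != nil {
		log.Fatal(err)
	}
	if err := nc.Flush(); err != nil {
		log.Fatal(err)
	}
	log.Println("message published")
}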

Pod anti-affinity

One of the main advantages of running NATS on top of Kubernetes is how resilient the cluster becomes, particularly during node restarts. However, if all NATS pods are scheduled onto the same node(s), this advantage is significantly reduced, and a node restart may even cause service downtime.

It is then highly recommended that one adopts pod anti-affinity in order to increase availability. This is enabled by default (see nats.yml and the sketch below).
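
The rule in nats.yml looks something like the sketch below (the app: nats label is an assumption). Because it uses requiredDuringSchedulingIgnoredDuringExecution, it is a hard constraint - hence the one-node-per-instance requirement mentioned in the Scale section:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: nats
      topologyKey: kubernetes.io/hostname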
