• Stars: 574
• Rank: 77,739 (Top 2%)
• Language: Go
• License: Apache License 2.0
• Created: almost 8 years ago
• Updated: about 1 year ago

Repository Details

NATS Operator

⚠️ The recommended way of running NATS on Kubernetes is by using the Helm charts, which also include JetStream support. The NATS Operator is not recommended for new deployments.

NATS Operator manages NATS clusters atop Kubernetes using CRDs. If you are looking to run NATS on Kubernetes without the operator, you can also find Helm charts in the nats-io/k8s repo. You can also find more info about running NATS on Kubernetes in the docs, as well as a minimal setup using only StatefulSets (without the operator) to get started here.

Requirements

Introduction

NATS Operator provides a NatsCluster Custom Resource Definition (CRD) that models a NATS cluster. This CRD allows for specifying the desired size and version of a NATS cluster, as well as several other advanced options:

apiVersion: nats.io/v1alpha2
kind: NatsCluster
metadata:
  name: example-nats-cluster
spec:
  size: 3
  version: "2.1.8"

NATS Operator monitors the creation, modification, and deletion of NatsCluster resources and reacts by performing any operations necessary to align the current status of the associated NATS clusters with the desired one.
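
For example, scaling the cluster above is a matter of patching the resource and letting the operator reconcile (a minimal sketch against the example resource defined above):

$ kubectl patch natscluster example-nats-cluster --type=merge -p '{"spec": {"size": 5}}'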

Installing

NATS Operator supports two different operation modes:

  • Namespace-scoped (classic): NATS Operator manages NatsCluster resources in the Kubernetes namespace where it is deployed.
  • Cluster-scoped (experimental): NATS Operator manages NatsCluster resources across all namespaces in the Kubernetes cluster.

The operation mode must be chosen when installing NATS Operator and cannot be changed later.

Namespace-scoped installation

To perform a namespace-scoped installation of NATS Operator in the Kubernetes cluster pointed at by the current context, you may run:

$ kubectl apply -f https://github.com/nats-io/nats-operator/releases/latest/download/00-prereqs.yaml
$ kubectl apply -f https://github.com/nats-io/nats-operator/releases/latest/download/10-deployment.yaml

This will, by default, install NATS Operator in the default namespace and observe NatsCluster resources created in the default namespace only. To install in a different namespace, you must first create that namespace and edit the manifests above to specify its name wherever necessary.
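
As a rough sketch, assuming a hypothetical nats-ops namespace (and assuming the manifests pin their resources to namespace: default, which you should verify before applying):

$ kubectl create ns nats-ops
$ curl -sL https://github.com/nats-io/nats-operator/releases/latest/download/00-prereqs.yaml | \
    sed 's/namespace: default/namespace: nats-ops/g' | kubectl apply -f -
$ curl -sL https://github.com/nats-io/nats-operator/releases/latest/download/10-deployment.yaml | \
    sed 's/namespace: default/namespace: nats-ops/g' | kubectl apply -f -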

WARNING: To perform multiple namespace-scoped installations of NATS Operator, you must manually edit the nats-operator-binding cluster role binding in the deploy/00-prereqs.yaml file to add all the required service accounts. Failing to do so may cause all NATS Operator instances to malfunction.

WARNING: When performing a namespace-scoped installation of NATS Operator, you must make sure that all other namespace-scoped installations in the Kubernetes cluster share the same version. Installing different versions of NATS Operator in the same Kubernetes cluster may cause unexpected behavior, as the schema of the CRDs that NATS Operator registers may change between versions.

Alternatively, you may use Helm to perform a namespace-scoped installation of NATS Operator, using the Helm charts found under helm/nats-operator in this repo.
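
A minimal sketch, assuming a local checkout of this repository and Helm 3 (the release name nats-operator is arbitrary):

$ git clone https://github.com/nats-io/nats-operator.git
$ helm install nats-operator ./nats-operator/helm/nats-operator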

Cluster-scoped installation (experimental)

Cluster-scoped installations of NATS Operator must live in the nats-io namespace. This namespace must be created beforehand:

$ kubectl create ns nats-io

Then, you must manually edit the manifests in deployment/ so that they reference the nats-io namespace and enable the ClusterScoped feature gate in the NATS Operator deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nats-operator
  namespace: nats-io
spec:
  (...)
    spec:
      containers:
      - name: nats-operator
        (...)
        args:
        - nats-operator
        - --feature-gates=ClusterScoped=true
        (...)

Once you have done this, you may install NATS Operator by running:

$ kubectl apply -f https://github.com/nats-io/nats-operator/releases/latest/download/00-prereqs.yaml
$ kubectl apply -f https://github.com/nats-io/nats-operator/releases/latest/download/10-deployment.yaml

WARNING: When performing a cluster-scoped installation of NATS Operator, you must make sure that there are no other deployments of NATS Operator in the Kubernetes cluster. If you have a previous installation of NATS Operator, you must uninstall it before performing a cluster-scoped installation of NATS Operator.

Creating a NATS cluster

Once NATS Operator has been installed, you will be able to confirm that two new CRDs have been registered in the cluster:

$ kubectl get crd
NAME                       CREATED AT
natsclusters.nats.io       2019-01-11T17:16:36Z
natsserviceroles.nats.io   2019-01-11T17:16:40Z

To create a NATS cluster, you must create a NatsCluster resource representing the desired status of the cluster. For example, to create a 3-node NATS cluster you may run:

$ cat <<EOF | kubectl create -f -
apiVersion: nats.io/v1alpha2
kind: NatsCluster
metadata:
  name: example-nats-cluster
spec:
  size: 3
  version: "1.3.0"
EOF

NATS Operator will react to the creation of such a resource by creating three NATS pods. NATS Operator keeps monitoring these pods (and replaces them in case of failure) for as long as the NatsCluster resource exists.
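
The pods can be observed using the nats_cluster label (the label name is the one used by the examples later in this document; a sketch):

$ kubectl get pods -l nats_cluster=example-nats-cluster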

Listing NATS clusters

To list all the NATS clusters:

$ kubectl get nats --all-namespaces
NAMESPACE   NAME                   AGE
default     example-nats-cluster   2m
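
To inspect the full spec and status of a single cluster, you can fetch the natscluster resource directly:

$ kubectl get natscluster example-nats-cluster -o yaml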

TLS support

By using a pair of opaque secrets (one for the clients and another for the routes), it is possible to enable TLS both for the communication with clients and for the transport between the routes:

apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats"
spec:
  # Number of nodes in the cluster
  size: 3
  version: "1.3.0"

  tls:
    # Certificates to secure the NATS client connections:
    serverSecret: "nats-clients-tls"

    # Certificates to secure the routes.
    routesSecret: "nats-routes-tls"

In order for TLS to be properly established between the nodes, it is necessary to create a wildcard certificate that matches the subdomain created for the clients' service as well as the one created for the routes.

By default, the routesSecret has to provide the files ca.pem, route-key.pem, and route.pem, containing the CA certificate, the route private key, and the route certificate, respectively.

$ kubectl create secret generic nats-routes-tls --from-file=ca.pem --from-file=route-key.pem --from-file=route.pem

Similarly, by default the serverSecret has to provide the files ca.pem, server-key.pem, and server.pem, containing the CA certificate, the server private key, and the server certificate used to secure the connection with the clients.

$ kubectl create secret generic nats-clients-tls --from-file=ca.pem --from-file=server-key.pem --from-file=server.pem
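
For local experimentation, such files could be generated with openssl, for example (a rough sketch assuming the cluster is named nats and lives in the default namespace; for production prefer a proper CA or cert-manager as shown below):

# Throwaway CA (sketch only).
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout ca-key.pem -out ca.pem -subj "/CN=nats-ca"

# Key and CSR for the client-facing server certificate.
$ openssl req -newkey rsa:2048 -nodes \
    -keyout server-key.pem -out server.csr -subj "/CN=nats.default.svc"

# Sign with the CA, adding the SAN that modern (Go-based) clients require.
$ openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
    -days 365 -out server.pem -extfile <(printf "subjectAltName=DNS:nats.default.svc")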

Consider, though, that you may wish to manage the certificate authorities for routes between clusters independently, to support the ability to roll between CAs or their intermediates.

Any filename below can also be an absolute path, allowing you to mount a CA bundle in a place of your choosing.

NATS also supports kubernetes.io/tls secrets (like the ones managed by cert-manager) as well as any secrets containing a CA certificate, private key, and certificate under arbitrary names. It is possible to override the default file names as follows:

apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats"
spec:
  # Number of nodes in the cluster
  size: 3
  version: "1.3.0"

  tls:
    # Certificates to secure the NATS client connections:
    serverSecret: "nats-clients-tls"
    # Name of the CA in serverSecret
    serverSecretCAFileName: "ca.crt"
    # Name of the key in serverSecret
    serverSecretKeyFileName: "tls.key"
    # Name of the certificate in serverSecret
    serverSecretCertFileName: "tls.crt"

    # Certificates to secure the routes.
    routesSecret: "nats-routes-tls"
    # Path to the CA bundle (an absolute path, not a file in routesSecret)
    routesSecretCAFileName: "/etc/ca-bundle/routes-bundle.pem"
    # Name of the key in routesSecret
    routesSecretKeyFileName: "tls.key"
    # Name of the certificate in routesSecret
    routesSecretCertFileName: "tls.crt"

  template:
    spec:
      containers:
      - name: "nats"
        volumeMounts:
        - name: "ca-bundle"
          mountPath: "/etc/ca-bundle"
          readOnly: true
      volumes:
      - name: "ca-bundle"
        configMap:
          name: "our-ca-bundle"

Cert-Manager

If cert-manager is available in your cluster, you can easily generate TLS certificates for NATS as follows:

Create a self-signed cluster issuer (or namespace-bound issuer) to create NATS' CA certificate:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: selfsigning
spec:
  selfSigned: {}

Create your NATS cluster's CA certificate using the new selfsigning issuer:

apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: nats-ca
spec:
  secretName: nats-ca
  duration: 8736h # 1 year
  renewBefore: 240h # 10 days
  issuerRef:
    name: selfsigning
    kind: ClusterIssuer
  commonName: nats-ca
  usages: 
    - cert sign # workaround for odd cert-manager behavior
  organization:
  - Your organization
  isCA: true

Create your NATS cluster issuer based on the new nats-ca CA:

apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: nats-ca
spec:
  ca:
    secretName: nats-ca

Create your NATS cluster's server certificate (assuming NATS is running in the nats-io namespace, otherwise, set the commonName and dnsNames fields appropriately):

apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: nats-server-tls
spec:
  secretName: nats-server-tls
  duration: 2160h # 90 days
  renewBefore: 240h # 10 days
  usages:
  - signing
  - key encipherment
  - server auth
  issuerRef:
    name: nats-ca
    kind: Issuer
  organization:
  - Your organization
  commonName: nats.nats-io.svc.cluster.local
  dnsNames:
  - nats.nats-io.svc

Create your NATS cluster's routes certificate (assuming NATS is running in the nats-io namespace, otherwise, set the commonName and dnsNames fields appropriately):

apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: nats-routes-tls
spec:
  secretName: nats-routes-tls
  duration: 2160h # 90 days
  renewBefore: 240h # 10 days
  usages:
  - signing
  - key encipherment
  - server auth
  - client auth # included because routes mutually verify each other
  issuerRef:
    name: nats-ca
    kind: Issuer
  organization:
  - Your organization
  commonName: "*.nats-mgmt.nats-io.svc.cluster.local"
  dnsNames:
  - "*.nats-mgmt.nats-io.svc"

Authorization

Using ServiceAccounts

⚠️ The ServiceAccounts integration uses a very rudimentary approach to config reloading and relies on watching CRDs and advanced Kubernetes APIs that may not be available in your cluster. The decentralized JWT approach should be preferred instead; to learn more, see: https://docs.nats.io/developing-with-nats/tutorials/jwt

The NATS Operator can define permissions based on Roles by using any ServiceAccount present in a namespace. This feature requires a Kubernetes v1.12+ cluster with the TokenRequest API enabled. To try this feature using minikube v0.30.0+, you can configure it to start as follows:

$ minikube start \
    --extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/sa.key \
    --extra-config=apiserver.service-account-key-file=/var/lib/minikube/certs/sa.pub \
    --extra-config=apiserver.service-account-issuer=api \
    --extra-config=apiserver.service-account-api-audiences=api,spire-server \
    --extra-config=apiserver.authorization-mode=Node,RBAC \
    --extra-config=kubelet.authentication-token-webhook=true

Please note that availability of this feature across Kubernetes offerings may vary widely.

ServiceAccounts integration can then be enabled by setting the enableServiceAccounts flag to true in the NatsCluster configuration.

apiVersion: nats.io/v1alpha2
kind: NatsCluster
metadata:
  name: example-nats
spec:
  size: 3
  version: "1.3.0"

  pod:
    # NOTE: Only supported in Kubernetes v1.12+.
    enableConfigReload: true
  auth:
    # NOTE: Only supported in Kubernetes v1.12+ clusters having the "TokenRequest" API enabled.
    enableServiceAccounts: true

Permissions for a ServiceAccount can be set by creating a NatsServiceRole for that account. In the example below there are two accounts, one of which is an admin user with broader permissions.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nats-admin-user
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nats-user
---
apiVersion: nats.io/v1alpha2
kind: NatsServiceRole
metadata:
  name: nats-user
  namespace: nats-io

  # Specifies which NATS cluster will be mapping this account.
  labels:
    nats_cluster: example-nats
spec:
  permissions:
    publish: ["foo.*", "foo.bar.quux"]
    subscribe: ["foo.bar"]
---
apiVersion: nats.io/v1alpha2
kind: NatsServiceRole
metadata:
  name: nats-admin-user
  namespace: nats-io
  labels:
    nats_cluster: example-nats
spec:
  permissions:
    publish: [">"]
    subscribe: [">"]

The above will create two different Secrets, which can then be mounted as volumes in a Pod.

$ kubectl -n nats-io get secrets
NAME                                       TYPE          DATA      AGE
...
nats-admin-user-example-nats-bound-token   Opaque        1         43m
nats-user-example-nats-bound-token         Opaque        1         43m

Please note that the NatsServiceRole must be created in the same namespace where the NatsCluster is running, whereas the bound-token Secrets are created for ServiceAccount resources that may be placed in various namespaces.

An example of mounting the secret in a Pod can be found below:

apiVersion: v1
kind: Pod
metadata:
  name: nats-user-pod
  labels:
    nats_cluster: example-nats
spec:
  volumes:
    - name: "token"
      projected:
        sources:
        - secret:
            name: "nats-user-example-nats-bound-token"
            items:
              - key: token
                path: "token"
  restartPolicy: Never
  containers:
    - name: nats-ops
      command: ["/bin/sh"]
      image: "wallyqs/nats-ops:latest"
      tty: true
      stdin: true
      stdinOnce: true
      volumeMounts:
      - name: "token"
        mountPath: "/var/run/secrets/nats.io"

Then, within the Pod, the created token can be used to authenticate against the server:

$ kubectl -n nats-io attach -it nats-user-pod

/go # nats-sub -s nats://nats-user:`cat /var/run/secrets/nats.io/token`@example-nats:4222 hello.world
Listening on [hello.world]
^C
/go # nats-sub -s nats://nats-admin-user:`cat /var/run/secrets/nats.io/token`@example-nats:4222 hello.world
Can't connect: nats: authorization violation

Using a single secret with explicit configuration

Authorization can also be set for the server by using a secret where the permissions are defined in JSON:

{
  "users": [
    { "username": "user1", "password": "secret1" },
    { "username": "user2", "password": "secret2",
      "permissions": {
        "publish": ["hello.*"],
        "subscribe": ["hello.world"]
      }
    }
  ],
  "default_permissions": {
    "publish": ["SANDBOX.*"],
    "subscribe": ["PUBLIC.>"]
  }
}

Example of creating a secret to set the permissions:

kubectl create secret generic nats-clients-auth --from-file=clients-auth.json

Now when creating a NATS cluster it is possible to set the permissions as in the following example:

apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "example-nats-auth"
spec:
  size: 3
  version: "1.1.0"

  auth:
    # Definition in JSON of the users' permissions
    clientsAuthSecret: "nats-clients-auth"

    # How long to wait for authentication
    clientsAuthTimeout: 5
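
Assuming the clients-auth.json shown above, the credentials can then be verified from a pod that has the NATS utilities available (a sketch; the service name matches the cluster name):

$ nats-sub -s nats://user2:secret2@example-nats-auth:4222 hello.world
Listening on [hello.world]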

Configuration Reload

On Kubernetes v1.12+ clusters it is possible to enable on-the-fly configuration reloading for the servers that form the cluster. This can be combined with the authorization support, so that whenever the user permissions change, the servers reload and apply the new permissions.

apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "example-nats-auth"
spec:
  size: 3
  version: "1.1.0"

  pod:
    # Enable on-the-fly NATS Server config reload
    # NOTE: Only supported in Kubernetes v1.12+.
    enableConfigReload: true

    # Possible to customize version of reloader image
    reloaderImage: connecteverything/nats-server-config-reloader
    reloaderImageTag: "0.2.2-v1alpha2"
    reloaderImagePullPolicy: "IfNotPresent"
  auth:
    # Definition in JSON of the users' permissions
    clientsAuthSecret: "nats-clients-auth"

    # How long to wait for authentication
    clientsAuthTimeout: 5
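
With config reload enabled, changing permissions is then a matter of updating the secret in place; one way is kubectl's client-side dry run (a sketch):

$ kubectl create secret generic nats-clients-auth --from-file=clients-auth.json \
    --dry-run=client -o yaml | kubectl apply -f -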

Connecting operated NATS clusters to external NATS clusters

By using the extraRoutes field on the spec, you can make the operated NATS cluster create routes to clusters outside of Kubernetes:

apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats"
spec:
  size: 3
  version: "1.4.1"

  extraRoutes:
    - route: "nats://nats-a.example.com:6222"
    - route: "nats://nats-b.example.com:6222"
    - route: "nats://nats-c.example.com:6222"

It is also possible to connect to another operated NATS cluster as follows:

apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats-v2-2"
spec:
  size: 3
  version: "1.4.1"

  extraRoutes:
    - cluster: "nats-v2-1"

Resolvers

The operator only supports the URL() resolver; see example/example-super-cluster.yaml.
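
For reference, in plain NATS server configuration the URL() resolver points the server at an account server endpoint, along the lines of the following sketch (the endpoint is hypothetical; see the linked example for how this is wired through the operator):

resolver: URL(http://nats-account-server.nats-io.svc:9090/jwt/v1/accounts/)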

Development

Building the Docker Image

To build the nats-operator Docker image:

$ docker build -f docker/operator/Dockerfile . -t <image:tag>

To build the nats-server-config-reloader:

$ docker build -f docker/reloader/Dockerfile . -t <image:tag>

You'll need Docker 17.06.0-ce or higher.
