Single dependency Kubernetes clusters for local testing, experimenting and development

Kubernetes development cluster bootstrapping with Nix packages

This project aims to provide single-dependency Kubernetes clusters for local testing, experimenting, and development purposes.

Moving pictures are worth more than a thousand words, so here is a short demo:

[demo animation]

Nix?

Have you ever heard about Nix, the functional package manager?

In case you haven't, don't worry – the important thing is that it provides all the third-party dependencies needed for this project, pinned to a dedicated version. This guarantees stable, reproducible installations.

KuberNix itself is a Rusty helper program which takes care of bootstrapping the Kubernetes cluster, passing the right configuration parameters around, and keeping track of the running processes.

What is inside

The following technology stack is currently being used:

Application       Version
cfssl             v1.5.0
cni-plugins       v0.9.0
conmon            v2.0.25
conntrack-tools   v1.4.6
cri-o-wrapper     v1.20.0
cri-tools         v1.20.0
etcd              v3.3.25
iproute2          v5.10.0
iptables          v1.8.6
kmod              v27
kubectl           v1.19.5
kubernetes        v1.19.5
nss-cacert        v3.60
podman-wrapper    v2.2.1
runc              v1.0.0-rc92
socat             v1.7.4.1
sysctl            v1003.1.2008
util-linux        v2.36.1

Some other tools are not explicitly mentioned here, because they are not first-level dependencies.

Single Dependency

With Nix

As already mentioned, there is only a single dependency needed to run this project: Nix. To set up Nix, simply run:

$ curl https://nixos.org/nix/install | sh

Please make sure to follow the instructions output by the script.

With the Container Runtime of your Choice

It is also possible to run KuberNix in the container runtime of your choice. To do this, simply grab the latest image from saschagrunert/kubernix. Please note that running KuberNix inside a container requires privileged mode and host networking. For example, we can run KuberNix with podman like this:

$ sudo podman run \
    --net=host \
    --privileged \
    -it docker.io/saschagrunert/kubernix:latest
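
The same invocation should work with Docker or any other drop-in replacement as well, for example:

$ sudo docker run \
    --net=host \
    --privileged \
    -it docker.io/saschagrunert/kubernix:latest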

Getting Started

Cluster Bootstrap

To bootstrap your first cluster, download one of the latest release binaries or build the application via:

$ make build-release

The binary should now be available at target/release/kubernix within the project. Alternatively, install the application via cargo install kubernix.
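
For example, installing the latest published release from crates.io:

$ cargo install kubernix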

After the successful binary retrieval, start KuberNix by running it as root:

$ sudo kubernix

KuberNix now takes care of setting up the Nix environment correctly, downloads the needed binaries, and starts the cluster. By default, it creates a directory called kubernix-run in the current path, which contains all necessary data for the cluster.

Shell Environment

If everything went fine, you should be dropped into a new shell session, like this:

[INFO ] Everything is up and running
[INFO ] Spawning interactive shell
[INFO ] Please be aware that the cluster stops if you exit the shell
>

Now you can access your cluster via tools like kubectl:

> kubectl get pods --all-namespaces
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-85d84dd694-xz997   1/1     Running   0          102s

All configuration files have been written to the target directory, which is now the current one:

> ls -1
apiserver/
controllermanager/
coredns/
crio/
encryptionconfig/
etcd/
kubeconfig/
kubelet/
kubernix.env
kubernix.toml
nix/
pki/
policy.json
proxy/
scheduler/

For example, the log files for the different running components are now available within their corresponding directory:

> ls -1 **.log
apiserver/kube-apiserver.log
controllermanager/kube-controller-manager.log
crio/crio.log
etcd/etcd.log
kubelet/kubelet.log
proxy/kube-proxy.log
scheduler/kube-scheduler.log
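
For example, to follow the CRI-O log output while debugging:

> tail -f crio/crio.log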

If you want to spawn an additional shell session, simply run kubernix shell in the same directory where the initial bootstrap happened:

$ sudo kubernix shell
[INFO  kubernix] Spawning new kubernix shell in 'kubernix-run'
> kubectl run --generator=run-pod/v1 --image=alpine -it alpine sh
If you don't see a command prompt, try pressing enter.
/ #

This means that you can spawn as many shells as you want.

Cleanup

The whole cluster gets automatically destroyed if you exit the shell session from the initial process:

> exit
[INFO ] Cleaning up

Please note that the directory where all the data is stored is not removed when KuberNix exits. This means that you're still able to access the log and configuration files for further processing. If you start the cluster again, the cluster files will be reused. This is especially handy if you want to test configuration changes.
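
If you prefer to start from scratch instead, simply remove the run directory before bootstrapping again:

$ sudo rm -rf kubernix-run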

Restart

If you start KuberNix again in the same run directory, then it will reuse the configuration during the cluster bootstrapping process. This means that you can modify all data inside the run root for testing and debugging purposes. The startup of the individual components is driven by YAML files called run.yml, which are available inside the directories of the corresponding components. For example, etcd gets started via:

> cat kubernix-run/etcd/run.yml
---
command: /nix/store/qlbsv0hvi0j5qj3631dzl9srl75finlk-etcd-3.3.13-bin/bin/etcd
args:
  - "--advertise-client-urls=https://127.0.0.1:2379"
  - "--client-cert-auth"
  - "--data-dir=/โ€ฆ/kubernix-run/etcd/run"
  - "--initial-advertise-peer-urls=https://127.0.0.1:2380"
  - "--initial-cluster-state=new"
  - "--initial-cluster-token=etcd-cluster"
  - "--initial-cluster=etcd=https://127.0.0.1:2380"
  - "--listen-client-urls=https://127.0.0.1:2379"
  - "--listen-peer-urls=https://127.0.0.1:2380"
  - "--name=etcd"
  - "--peer-client-cert-auth"
  - "--cert-file=/โ€ฆ/kubernix-run/pki/kubernetes.pem"
  - "--key-file=/โ€ฆ/kubernix-run/pki/kubernetes-key.pem"
  - "--peer-cert-file=/โ€ฆ/kubernix-run/pki/kubernetes.pem"
  - "--peer-key-file=/โ€ฆ/kubernix-run/pki/kubernetes-key.pem"
  - "--peer-trusted-ca-file=/โ€ฆ/kubernix-run/pki/ca.pem"
  - "--trusted-ca-file=/โ€ฆ/kubernix-run/pki/ca.pem"

Configuration

KuberNix currently provides the following configuration options:

CLI argument             Description                                                   Default        Environment variable
-r, --root               Path where all the runtime data is stored                     kubernix-run   KUBERNIX_ROOT
-l, --log-level          Logging verbosity                                             info           KUBERNIX_LOG_LEVEL
-c, --cidr               CIDR used for the cluster network                             10.10.0.0/16   KUBERNIX_CIDR
-s, --shell              The shell executable to be used                               $SHELL or sh   KUBERNIX_SHELL
-e, --no-shell           Do not spawn an interactive shell after bootstrap             false          KUBERNIX_NO_SHELL
-n, --nodes              The number of nodes to be registered                          1              KUBERNIX_NODES
-u, --container-runtime  The container runtime for the nodes (ignored if nodes is 1)   podman         KUBERNIX_CONTAINER_RUNTIME
-o, --overlay            Nix package overlay to be used                                (none)         KUBERNIX_OVERLAY
-p, --packages           Additional Nix dependencies to add to the environment         (none)         KUBERNIX_PACKAGES

Please ensure that the CIDR does not overlap with existing local networks and that your setup has access to the internet. The CIDR will be automatically split up over the necessary cluster components.
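
For example, a possible invocation combining some of these options (the values here are arbitrary illustrations):

$ sudo kubernix \
    --root /tmp/kubernix-run \
    --log-level debug \
    --cidr 172.30.0.0/16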

Multinode Support

It is possible to spawn multiple worker nodes, too. To do this, simply adjust the -n, --nodes command line argument and select your preferred container runtime via -u, --container-runtime. The default runtime is podman, but every other Docker drop-in replacement should work out of the box.
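
For example, to bootstrap a cluster with three worker nodes on top of Docker:

$ sudo kubernix --nodes 3 --container-runtime docker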

Overlays

Overlays provide a method to extend and change Nix derivations. This means that we're able to change dependencies during the cluster bootstrapping process. For example, we can exchange the CRI-O version in use with a local checkout by writing this simple overlay.nix:

self: super: {
  cri-o = super.cri-o.overrideAttrs(old: {
    src = ../path/to/go/src/github.com/cri-o/cri-o;
  });
}

Now we can run KuberNix with the --overlay, -o command line argument:

$ sudo kubernix --overlay overlay.nix
[INFO  kubernix] Nix environment not found, bootstrapping one
[INFO  kubernix] Using custom overlay 'overlay.nix'
these derivations will be built:
  /nix/store/9jb43i2mqjc94mbx30d9nrx529w6lngw-cri-o-1.15.2.drv
  building '/nix/store/9jb43i2mqjc94mbx30d9nrx529w6lngw-cri-o-1.15.2.drv'...

This technique makes daily development of Kubernetes components easy: simply switch dependencies to local paths or try out new versions.

Additional Packages

It is also possible to add packages to the KuberNix environment by specifying them via the --packages, -p command line parameter. This way you can easily pull in additional tools in a reproducible manner. For example, to always use the same Helm version, you could simply run:

$ sudo kubernix -p kubernetes-helm
[INFO ] Nix environment not found, bootstrapping one
[INFO ] Bootstrapping cluster inside nix environment
…
> helm init
> helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}

All available packages are listed on the official Nix index.
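
If Nix is already installed, you can also search the index locally; one possible way is to query all available packages with nix-env and filter the output:

$ nix-env -qaP | grep -i helm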

Contributing

You want to contribute to this project? Wow, thanks! So please just fork it and send me a pull request.
