fluxcd-multi-tenancy

Flux v1: Manage a multi-tenant cluster with Flux and Kustomize

We are moving to Flux v2

⚠️ Please note: In preparation for Flux v2 GA, this repository with Flux v1 examples has been archived. The Flux v2 equivalent of what is shown here can be found at flux2-multi-tenancy.

Thanks a lot for your interest.

For posterity

This repository serves as a starting point for a multi-tenant cluster managed with Git, Flux and Kustomize.

I'm assuming that a multi-tenant cluster is shared by multiple teams. Cluster-wide operations are performed by the cluster administrators, while namespace-scoped operations are performed by the various teams, each with its own Git repository. That means a team member who is not a cluster admin can't create namespaces or custom resource definitions, or change anything in another team's namespace.

Flux multi-tenancy

Repositories

First you'll have to create Git repositories for the cluster admins and for each dev team:

Team        Namespace   Git Repository    Flux RBAC
ADMIN       all         org/dev-cluster   Cluster-wide, e.g. namespaces, CRDs, Flux controllers
DEV-TEAM1   team1       org/dev-team1     Namespace-scoped, e.g. deployments, custom resources
DEV-TEAM2   team2       org/dev-team2     Namespace-scoped, e.g. ingress, services, network policies

Cluster admin repository structure:

├── .flux.yaml
├── base
│   ├── flux
│   └── memcached
├── cluster
│   ├── common
│   │   ├── crds.yaml
│   │   └── kustomization.yaml
│   └── team1
│       ├── flux-patch.yaml
│       ├── kubeconfig.yaml
│       ├── kustomization.yaml
│       ├── namespace.yaml
│       ├── psp.yaml
│       └── rbac.yaml
├── install
└── scripts

The base folder holds the deployment spec used for installing Flux in the flux-system namespace and in the team namespaces. All Flux instances share the same Memcached server, deployed at install time in the flux-system namespace.

With .flux.yaml we configure Flux to run Kustomize build on the cluster dir and deploy the generated manifests:

version: 1
commandUpdated:
  generators:
    - command: kustomize build .
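
For context, the kustomization in the cluster dir that this command builds is presumably just an aggregation of the cluster-wide objects and the per-team directories, along these lines (a sketch; the flagger and gatekeeper entries are added later in this guide):

# Sketch of cluster/kustomization.yaml: everything that kustomize build
# emits here is applied by the cluster-wide Flux.
bases:
  - ./common/
  - ./team1/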

Development team1 repository structure:

├── .flux.yaml
├── flux-patch.yaml
├── kustomization.yaml
└── workloads
    ├── frontend
    │   ├── deployment.yaml
    │   ├── kustomization.yaml
    │   └── service.yaml
    └── backend
        ├── deployment.yaml
        ├── kustomization.yaml
        └── service.yaml

The workloads folder contains the desired state of the team1 namespace, and flux-patch.yaml contains the Flux annotations that define how the container images should be updated.
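
For illustration, here is a minimal sketch of what such a patch might contain, assuming the frontend deployment defined under workloads/ (the annotation values are illustrative; see the Flux v1 docs for the full update policy syntax):

# Hypothetical excerpt of flux-patch.yaml: a strategic merge patch that
# turns on automated image updates for the frontend deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: team1
  annotations:
    fluxcd.io/automated: "true"          # let Flux commit image tag updates to Git
    fluxcd.io/tag.frontend: semver:~1.0  # only follow 1.x tags of the frontend image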

With .flux.yaml we configure Flux to run Kustomize build, apply the container update policies and deploy the generated manifests:

version: 1
patchUpdated:
  generators:
    - command: kustomize build .
  patchFile: flux-patch.yaml

Install the cluster admin Flux

In the dev-cluster repo, change the git URL to point to your fork:

vim ./install/flux-patch.yaml

--git-url=git@github.com:org/dev-cluster

Install the cluster-wide Flux with kubectl kustomize:

kubectl apply -k ./install/

Get the public SSH key with:

fluxctl --k8s-fwd-ns=flux-system identity

Add the public key to the github.com:org/dev-cluster repository deploy keys with write access.

The cluster-wide Flux will do the following:

  • create the cluster objects from the cluster/common directory (CRDs, cluster roles, etc.)
  • create the team1 namespace (see the sketch below) and deploy a Flux instance with restricted access to that namespace
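
The namespace manifest itself is presumably minimal, something like this sketch of cluster/team1/namespace.yaml:

# Sketch of cluster/team1/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team1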

Install a Flux per team

Change the dev team1 git URL:

vim ./cluster/team1/flux-patch.yaml

--git-url=git@github.com:org/dev-team1

When you commit your changes, the system Flux will configure team1's Flux to sync with the org/dev-team1 repository.

Get the public SSH key for team1 with:

fluxctl --k8s-fwd-ns=team1 identity

Add the public key to the github.com:org/dev-team1 deploy keys with write access. Team1's Flux will apply the manifests from the org/dev-team1 repository only in the team1 namespace; this is enforced with RBAC and role bindings.
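
The rbac.yaml in the admin repository is not reproduced here, but the enforcement presumably boils down to a service account for team1's Flux bound to a Role that only grants access inside the team1 namespace, roughly like this sketch:

# Sketch of cluster/team1/rbac.yaml: team1's Flux runs under a service
# account that has full access in team1 and nothing anywhere else.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flux
  namespace: team1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: flux
  namespace: team1
rules:
  - apiGroups: ['*']
    resources: ['*']
    verbs: ['*']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: flux
  namespace: team1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: flux
subjects:
  - kind: ServiceAccount
    name: flux
    namespace: team1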

If team1 needs to deploy a controller that depends on a CRD or a cluster role, they'll have to open a PR in the org/dev-cluster repository and add those cluster-wide objects in the cluster/common directory.

Team1's Flux instance can be customised with different options than the system Flux using cluster/team1/flux-patch.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
        - name: flux
          args:
            - --manifest-generation=true
            - --memcached-hostname=flux-memcached.flux-system
            - --memcached-service=
            - --git-poll-interval=5m
            - --sync-interval=5m
            - --ssh-keygen-dir=/var/fluxd/keygen
            - --k8s-allow-namespace=team1
            - --git-url=git@github.com:org/dev-team1
            - --git-branch=master

The --k8s-allow-namespace flag restricts the Flux discovery mechanism to a single namespace.

Install Flagger

Flagger is a progressive delivery Kubernetes operator that can be used to automate Canary, A/B testing and Blue/Green deployments.

Flux Flagger

You can deploy Flagger by including its manifests in the cluster/kustomization.yaml file:

bases:
  - ./flagger/
  - ./common/
  - ./team1/

Commit the changes to git and wait for system Flux to install Flagger and Prometheus:

fluxctl --k8s-fwd-ns=flux-system sync

kubectl -n flagger-system get po
NAME                                  READY   STATUS
flagger-64c6945d5b-4zgvh              1/1     Running
flagger-prometheus-6f6b558b7c-22kw5   1/1     Running

A team member can now push canary objects to org/dev-team1 repository and Flagger will automate the deployment process.
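
As an illustration only, such a canary object might look like the following sketch (the podinfo target, port and analysis values are hypothetical; with the kubernetes mesh provider configured above, Flagger runs Blue/Green style checks):

# Hypothetical Canary pushed to org/dev-team1
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: team1
spec:
  targetRef:                  # the deployment Flagger takes over
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    interval: 1m              # how often the checks run
    threshold: 5              # failed checks before rollback
    iterations: 10            # checks before promotion (Blue/Green)
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99
        interval: 1m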

Flagger can notify your teams when a canary deployment has been initialised, when a new revision has been detected and if the canary analysis failed or succeeded.

You can enable Slack notifications by editing the cluster/flagger/flagger-patch.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flagger
spec:
  template:
    spec:
      containers:
        - name: flagger
          args:
            - -mesh-provider=kubernetes
            - -metrics-server=http://flagger-prometheus:9090
            - -slack-user=flagger
            - -slack-channel=alerts
            - -slack-url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK

Enforce pod security policies per team

With pod security policies a cluster admin can define a set of conditions that a pod must run with in order to be accepted into the system.

For example, you can forbid a team from creating privileged containers or using the host network.

Edit the team1 pod security policy cluster/team1/psp.yaml:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: default-psp-team1
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  privileged: false
  hostIPC: false
  hostNetwork: false
  hostPID: false
  allowPrivilegeEscalation: false
  allowedCapabilities:
    - '*'
  fsGroup:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - '*'

Set privileged, hostIPC, hostNetwork and hostPID to false and commit the change to git. From this moment on, team1 will not be able to run containers with an elevated security context under the default service account.

If a team member adds a privileged container definition in the org/dev-team1 repository, Kubernetes will deny it:

kubectl -n team1 describe replicasets podinfo-5d7d9fc9d5

Error creating: pods "podinfo-5d7d9fc9d5-" is forbidden: unable to validate against any pod security policy:
[spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]

Enforce custom policies per team

Gatekeeper is a validating webhook that enforces CRD-based policies executed by Open Policy Agent.

Flux Gatekeeper

You can deploy Gatekeeper by including its manifests in the cluster/kustomization.yaml file:

bases:
  - ./gatekeeper/
  - ./flagger/
  - ./common/
  - ./team1/

Inside the gatekeeper dir there is a constraint template that instructs OPA to reject Kubernetes deployments if no container resources are specified.
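
The template itself is not reproduced here, but a constraint template of this sort is roughly the following (a condensed sketch; the Rego in the repository may differ):

# Sketch of a Gatekeeper ConstraintTemplate that backs the
# ContainerResources constraint used below.
apiVersion: templates.gatekeeper.sh/v1alpha1
kind: ConstraintTemplate
metadata:
  name: containerresources
spec:
  crd:
    spec:
      names:
        kind: ContainerResources
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package containerresources

        # reject deployments whose containers set no memory requests
        violation[{"msg": msg}] {
          container := input.review.object.spec.template.spec.containers[_]
          not container.resources.requests.memory
          msg := sprintf("container <%v> has no memory requests", [container.name])
        }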

Enable the constraint for team1 by editing the cluster/gatekeeper/constraints.yaml file:

apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: ContainerResources
metadata:
  name: containerresources
spec:
  match:
    namespaces:
      - team1
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]

Commit the changes to git and wait for system Flux to install Gatekeeper and apply the constraints:

fluxctl --k8s-fwd-ns=flux-system sync

watch kubectl -n gatekeeper-system get po

If a team member adds a deployment without CPU or memory resources in the org/dev-team1 repository, Gatekeeper will deny it:

kubectl -n team1 logs deploy/flux

admission webhook "validation.gatekeeper.sh" denied the request: 
[denied by containerresources] container <podinfo> has no memory requests
[denied by containerresources] container <sidecar> has no memory limits

Add a new team/namespace/repository

If you want to add another team to the cluster, first create a Git repository for it, e.g. github.com:org/dev-team2.

Run the create team script:

./scripts/create-team.sh team2

team2 created at cluster/team2/
team2 added to cluster/kustomization.yaml

Change the git URL in cluster/team2 dir:

vim ./cluster/team2/flux-patch.yaml

--git-url=git@github.com:org/dev-team2

Push the changes to the master branch of org/dev-cluster and sync with the cluster:

fluxctl --k8s-fwd-ns=flux-system sync

Get the team2 public SSH key with:

fluxctl --k8s-fwd-ns=team2 identity

Add the public key to the github.com:org/dev-team2 repository deploy keys with write access. Team2's Flux will apply the manifests from the org/dev-team2 repository only in the team2 namespace.

Isolate tenants

With this setup, Flux will prevent a team member from altering cluster-level objects or other teams' workloads.

In order to harden the tenant isolation, a cluster admin should consider using the following (a sketch of the first two items follows the list):

  • resource quotas (limit the compute resources that can be requested by a team)
  • network policies (restrict cross namespace traffic)
  • pod security policies (prevent running privileged containers or host network and filesystem usage)
  • Open Policy Agent admission controller (enforce custom policies on Kubernetes objects)
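
As a concrete illustration of the first two items (the values are made up; these manifests are not part of this repository):

# Illustrative hardening for the team1 namespace: a ResourceQuota capping
# compute requests/limits and a NetworkPolicy that only admits ingress
# traffic originating inside the namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team1-compute
  namespace: team1
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: team1
spec:
  podSelector: {}          # applies to all pods in team1
  ingress:
    - from:
        - podSelector: {}  # only pods from the same namespace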

Getting Help

If you have any questions about Flux and GitOps, reach out to the Flux community, for example on the #flux channel of the CNCF Slack.
