gitops-istio

A GitOps recipe for Progressive Delivery with Flux v2, Flagger and Istio

This is a guide where you will get hands-on experience with GitOps and Progressive Delivery using Kubernetes and Istio.

Introduction

What is GitOps?

GitOps is a way to do Continuous Delivery: it works by using Git as the source of truth for declarative infrastructure and workloads. For Kubernetes this means using git push instead of kubectl apply/delete or helm install/upgrade.

In this workshop you'll be using GitHub to host the config repository and Flux as the GitOps delivery solution.

What is Progressive Delivery?

Progressive delivery is an umbrella term for advanced deployment patterns like canaries, feature flags and A/B testing. Progressive delivery techniques reduce the risk of introducing a new software version in production by giving app developers and SRE teams fine-grained control over the blast radius.

In this workshop you'll be using Flagger, Istio and Prometheus to automate Canary Releases and A/B Testing for your applications.

Progressive Delivery GitOps Pipeline

Prerequisites

You'll need a Kubernetes cluster v1.20 or newer with LoadBalancer support.

For testing purposes you can use Minikube with 2 CPUs and 4GB of memory:

minikube start --cpus='2' --memory='4g'

If using Minikube, run the following in a separate terminal window/tab for the duration of this workshop:

minikube tunnel

This assigns an External-IP to the istio-ingressgateway service and allows the helm install to complete successfully.

Install jq, yq and the flux CLI with Homebrew:

brew install jq yq fluxcd/tap/flux

Verify that your cluster satisfies the prerequisites with:

flux check --pre

Fork this repository and clone it:

git clone https://github.com/<YOUR-USERNAME>/gitops-istio
cd gitops-istio

Cluster bootstrap

With the flux bootstrap command you can install Flux on a Kubernetes cluster and configure it to manage itself from a Git repository. If the Flux components are already present on the cluster, the bootstrap command will perform an upgrade if needed.

Bootstrap Flux by specifying your GitHub repository fork URL:

flux bootstrap git \
  --author-email=<YOUR-EMAIL> \
  --url=ssh://git@github.com/<YOUR-USERNAME>/gitops-istio \
  --branch=main \
  --path=clusters/my-cluster

The above command requires ssh-agent; if you're using Windows, see the flux bootstrap github documentation.

At bootstrap, Flux generates an SSH key and prints the public key. In order to sync your cluster state with Git, you need to copy the public key and create a deploy key with write access on your GitHub repository. On GitHub, go to Settings > Deploy keys, click on Add deploy key, check Allow write access, paste the Flux public key and click Add key.

When Flux has access to your repository it will do the following:

  • installs Istio using the Istio base, istiod and gateway Helm charts
  • waits for Istio control plane to be ready
  • installs Flagger, Prometheus and Grafana
  • creates the Istio public gateway
  • creates the prod namespace
  • creates the load tester deployment
  • creates the frontend deployment and canary
  • creates the backend deployment and canary

When bootstrapping a cluster with Istio, it is important to control the installation order. For the application pods to be injected with the Istio sidecar, the Istio control plane must be up and running before the apps.

With Flux v2 you can specify the execution order by defining dependencies between objects. For example, in clusters/my-cluster/apps.yaml we tell Flux that the apps reconciliation depends on the istio-system one:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 30m0s
  dependsOn:
    - name: istio-system
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps
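
For comparison, a minimal sketch of what the istio-system Kustomization could look like (the interval, path and health check below are assumptions; see clusters/my-cluster in your fork for the actual definition):

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: istio-system
  namespace: flux-system
spec:
  interval: 30m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./istio/system
  # waiting for istiod to become ready gates the dependent apps Kustomization (assumed health check)
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: istiod
      namespace: istio-system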

Watch Flux installing Istio first, then the demo apps:

watch flux get kustomizations

You can tail the Flux reconciliation logs with:

flux logs --all-namespaces --follow --tail=10

List all the Kubernetes resources managed by Flux with:

flux tree kustomization flux-system

Istio customizations

You can customize the Istio installation using the Flux HelmReleases located at istio/system/istio.yaml:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: istio-gateway
  namespace: istio-system
spec:
  dependsOn:
    - name: istio-base
    - name: istiod
  # source: https://github.com/istio/istio/blob/master/manifests/charts/gateway/values.yaml
  values:
    autoscaling:
      enabled: true

After modifying the Helm release values, you can push the change to git and Flux will reconfigure the Istio control plane according to your changes.
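
For example, to tune the gateway autoscaling you could extend the values section of the HelmRelease above (a sketch assuming the upstream gateway chart's autoscaling fields):

  values:
    autoscaling:
      enabled: true
      # assumed fields from the upstream gateway chart values.yaml
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 80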

You can monitor the Helm upgrades with:

flux -n istio-system get helmreleases --watch

To troubleshoot upgrade failures, you can inspect the Helm release with:

kubectl -n istio-system describe helmrelease istio-gateway

Flux issues Kubernetes events containing all the errors encountered during reconciliation. You can also configure Flux to publish the events to Slack, MS Teams, Discord and others; please see the notification guide for more details.
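
As an illustration, a minimal notification setup could look like this (the apiVersion and the slack-url Secret name are assumptions; check the Flux notification guide for the spec matching your Flux version):

apiVersion: notification.toolkit.fluxcd.io/v1beta2
kind: Provider
metadata:
  name: slack
  namespace: flux-system
spec:
  type: slack
  channel: general
  # Secret expected to contain the Slack webhook URL under the address key (assumed name)
  secretRef:
    name: slack-url
---
apiVersion: notification.toolkit.fluxcd.io/v1beta2
kind: Alert
metadata:
  name: on-call
  namespace: flux-system
spec:
  providerRef:
    name: slack
  eventSeverity: error
  eventSources:
    - kind: Kustomization
      name: '*'
    - kind: HelmRelease
      name: '*'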

Istio control plane upgrades

Istio upgrades are automated using GitHub Actions and Flux.

Flux Istio Operator

When a new Istio version is available, the update-istio GitHub Action workflow will open a pull request with the manifest updates needed for upgrading Istio. The new Istio version is tested on Kubernetes Kind by the e2e workflow and when the PR is merged into the main branch, Flux will upgrade Istio on the production cluster.

Application bootstrap

When Flux syncs the Git repository with your cluster, it creates the frontend/backend deployment, HPA and a canary object. Flagger uses the canary definition to create a series of objects: Kubernetes deployments, ClusterIP services, Istio destination rules and virtual services. These objects expose the application on the mesh and drive the canary analysis and promotion.

# applied by Flux
deployment.apps/frontend
horizontalpodautoscaler.autoscaling/frontend
canary.flagger.app/frontend

# generated by Flagger
deployment.apps/frontend-primary
horizontalpodautoscaler.autoscaling/frontend-primary
service/frontend
service/frontend-canary
service/frontend-primary
destinationrule.networking.istio.io/frontend-canary
destinationrule.networking.istio.io/frontend-primary
virtualservice.networking.istio.io/frontend
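
For reference, a trimmed sketch of what such a canary definition looks like (the values below are illustrative assumptions; the actual spec lives in apps/frontend/canary.yaml):

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: frontend
  namespace: prod
spec:
  # deployment that Flagger clones into frontend-primary
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  # HPA that Flagger clones for the primary workload
  autoscalerRef:
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    name: frontend
  service:
    # ClusterIP service port and Istio gateway/host attachment (assumed names)
    port: 80
    gateways:
      - istio-system/public-gateway
    hosts:
      - "*"
  analysis:
    interval: 10s
    threshold: 10
    iterations: 12
    # metrics and webhooks omitted for brevity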

Check if Flagger has successfully initialized the canaries:

kubectl -n prod get canaries

NAME       STATUS        WEIGHT
backend    Initialized   0
frontend   Initialized   0

When the frontend-primary deployment comes online, Flagger will route all traffic to the primary pods and scale the frontend deployment to zero.
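
Under the hood, Flagger drives this routing through the generated Istio virtual service; a trimmed sketch of the weights it manages (gateway and host names are assumptions):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: frontend
  namespace: prod
spec:
  gateways:
    - istio-system/public-gateway
  hosts:
    - "*"
  http:
    - route:
        # all traffic goes to the primary until a canary analysis starts
        - destination:
            host: frontend-primary
          weight: 100
        - destination:
            host: frontend-canary
          weight: 0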

Find the Istio ingress gateway address with:

kubectl -n istio-system get svc istio-ingressgateway -ojson | jq .status.loadBalancer.ingress

Open a browser and navigate to the ingress address; you'll see the frontend UI.

Canary releases

Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators like the HTTP request success rate, average request duration and pod health. Based on the KPI analysis, a canary is either promoted or aborted, and the analysis result is published to Slack.

A canary analysis is triggered by changes in any of the following objects:

  • Deployment PodSpec (container image, command, ports, env, etc)
  • ConfigMaps and Secrets mounted as volumes or mapped to environment variables

For workloads that are not receiving constant traffic, Flagger can be configured with a webhook that, when called, will start a load test for the target workload. The canary configuration can be found at apps/backend/canary.yaml.
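
A hedged sketch of such webhooks, as they might appear in the canary analysis (the URLs and commands are assumptions; the actual configuration is in apps/backend/canary.yaml):

  analysis:
    webhooks:
      # run a conformance test before routing traffic to the canary
      - name: conformance-test
        type: pre-rollout
        url: http://flagger-loadtester.prod/
        timeout: 15s
        metadata:
          type: bash
          cmd: "curl -sd 'test' http://backend-canary.prod/token | grep token"
      # generate traffic during the analysis so Istio metrics are available
      - name: load-test
        type: rollout
        url: http://flagger-loadtester.prod/
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://backend-canary.prod/"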

Flagger Canary Release

Pull the changes from GitHub:

git pull origin main

To trigger a canary deployment for the backend app, bump the container image:

yq e '.images[0].newTag="6.1.1"' -i ./apps/backend/kustomization.yaml
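
The yq command edits the images transformer in the app's kustomization; a minimal sketch of that file (the image name and resource list are assumptions):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - hpa.yaml
  - canary.yaml
images:
  - name: ghcr.io/stefanprodan/podinfo
    newTag: 6.1.1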

Commit and push changes:

git add -A && \
git commit -m "backend 6.1.1" && \
git push origin main

Tell Flux to pull the changes or wait one minute for Flux to detect the changes on its own:

flux reconcile source git flux-system

Watch Flux reconciling your cluster to the latest commit:

watch flux get kustomizations

After a couple of seconds, Flagger detects that the deployment revision changed and starts a new rollout:

$ kubectl -n prod describe canary backend

Events:

New revision detected! Scaling up backend.prod
Starting canary analysis for backend.prod
Pre-rollout check conformance-test passed
Advance backend.prod canary weight 5
...
Advance backend.prod canary weight 50
Copying backend.prod template spec to backend-primary.prod
Promotion completed! Scaling down backend.prod

During the analysis the canary’s progress can be monitored with Grafana. You can access the dashboard using port forwarding:

kubectl -n istio-system port-forward svc/flagger-grafana 3000:80

The Istio dashboard URL is http://localhost:3000/d/flagger-istio/istio-canary?refresh=10s&orgId=1&var-namespace=prod&var-primary=backend-primary&var-canary=backend

Canary Deployment

Note that if new changes are applied to the deployment during the canary analysis, Flagger will restart the analysis phase.

A/B testing

Besides weighted routing, Flagger can be configured to route traffic to the canary based on HTTP match conditions. In an A/B testing scenario, you'll be using HTTP headers or cookies to target a certain segment of your users. This is particularly useful for frontend applications that require session affinity.

You can enable A/B testing by specifying the HTTP match conditions and the number of iterations:

  analysis:
    # schedule interval (default 60s)
    interval: 10s
    # max number of failed metric checks before rollback
    threshold: 10
    # total number of iterations
    iterations: 12
    # canary match condition
    match:
      - headers:
          user-agent:
            regex: ".*Firefox.*"
      - headers:
          cookie:
            regex: "^(.*?;)?(type=insider)(;.*)?$"

The above configuration will run an analysis for two minutes targeting Firefox users and those who have the insider cookie. The frontend configuration can be found at apps/frontend/canary.yaml.

Trigger a deployment by updating the frontend container image:

yq e '.images[0].newTag="6.1.1"' -i ./apps/frontend/kustomization.yaml

git add -A && \
git commit -m "frontend 6.1.1" && \
git push origin main

flux reconcile source git flux-system

Flagger detects that the deployment revision changed and starts the A/B testing:

$ kubectl -n istio-system logs deploy/flagger -f | jq .msg

New revision detected! Scaling up frontend.prod
Waiting for frontend.prod rollout to finish: 0 of 1 updated replicas are available
Pre-rollout check conformance-test passed
Advance frontend.prod canary iteration 1/10
...
Advance frontend.prod canary iteration 10/10
Copying frontend.prod template spec to frontend-primary.prod
Waiting for frontend-primary.prod rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down frontend.prod

You can monitor all canaries with:

$ watch kubectl get canaries --all-namespaces

NAMESPACE   NAME      STATUS        WEIGHT
prod        frontend  Progressing   100
prod        backend   Succeeded     0

Rollback based on Istio metrics

Flagger makes use of the metrics provided by Istio telemetry to validate the canary workload. The frontend app analysis defines two metric checks:

    metrics:
      - name: error-rate
        templateRef:
          name: error-rate
          namespace: istio-system
        thresholdRange:
          max: 1
        interval: 30s
      - name: latency
        templateRef:
          name: latency
          namespace: istio-system
        thresholdRange:
          max: 500
        interval: 30s

The Prometheus queries used for checking the error rate and latency are located at flagger-metrics.yaml.
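
A trimmed sketch of such a metric template (the query below is simplified; see flagger-metrics.yaml for the exact PromQL):

apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: latency
  namespace: istio-system
spec:
  provider:
    type: prometheus
    address: http://prometheus.istio-system:9090
  # P99 request duration in milliseconds for the workload under analysis
  query: |
    histogram_quantile(0.99,
      sum(
        rate(istio_request_duration_milliseconds_bucket{
          reporter="destination",
          destination_workload_namespace="{{ namespace }}",
          destination_workload=~"{{ target }}"
        }[{{ interval }}])
      ) by (le)
    )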

Bump the frontend version to 6.1.2, then during the canary analysis generate HTTP 500 errors and high latency to test Flagger's rollback.

Generate HTTP 500 errors:

watch curl -b 'type=insider' http://<INGRESS-IP>/status/500

Generate latency:

watch curl -b 'type=insider' http://<INGRESS-IP>/delay/1

When the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary, the canary is scaled to zero and the rollout is marked as failed.

$ kubectl -n istio-system logs deploy/flagger -f | jq .msg

New revision detected! Scaling up frontend.prod
Pre-rollout check conformance-test passed
Advance frontend.prod canary iteration 1/10
Halt frontend.prod advancement error-rate 31 > 1
Halt frontend.prod advancement latency 2000 > 500
...
Rolling back frontend.prod failed checks threshold reached 10
Canary failed! Scaling down frontend.prod

You can extend the analysis with custom metric checks targeting Prometheus, Datadog and Amazon CloudWatch.

To configure alerting for the canary analysis with Slack, MS Teams, Discord or Rocket.Chat, see the docs.
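
As an illustration, per-canary alerting can be wired up with an alert provider referenced from the analysis (the names, channel and secret are assumptions; check the Flagger alerting docs for the exact spec):

apiVersion: flagger.app/v1beta1
kind: AlertProvider
metadata:
  name: on-call-slack
  namespace: prod
spec:
  type: slack
  channel: on-call-alerts
  # Secret expected to hold the Slack webhook URL under the address key (assumed name)
  secretRef:
    name: on-call-slack-url

The provider is then referenced from the canary analysis:

  analysis:
    alerts:
      - name: on-call
        severity: error
        providerRef:
          name: on-call-slack
          namespace: prod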

Getting Help

If you have any questions about progressive delivery, your feedback is always welcome!
