istio-hpa

Configure horizontal pod autoscaling with Istio metrics and Prometheus

One of the advantages of using a service mesh like Istio is its built-in monitoring capability: you don't have to instrument your web apps in order to monitor L7 traffic. The Istio telemetry service collects stats like HTTP request rate, response status codes and duration from the Envoy sidecars that run alongside your apps. Besides monitoring, these metrics can be used to drive autoscaling and canary deployments.

[diagram: Istio HPA]

What follows is a step-by-step guide on configuring HPA v2 with metrics provided by Istio Mixer. When installing Istio, make sure that the telemetry service and Prometheus are enabled. If you're using the GKE Istio add-on, you'll have to deploy Prometheus as described here.
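
For example, with the Helm-based Istio 1.x install, both components can be toggled from chart values (a sketch, assuming the prometheus.enabled and mixer.telemetry.enabled values from the Istio release charts; flag names may differ between Istio versions):

helm upgrade -i istio install/kubernetes/helm/istio \
  --namespace istio-system \
  --set prometheus.enabled=true \
  --set mixer.telemetry.enabled=true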

In order to use the Istio metrics together with the Horizontal Pod Autoscaler, you'll need an adapter that can run Prometheus queries. Zalando made a general-purpose metrics adapter for Kubernetes called kube-metrics-adapter. It scans the HPA objects, executes the PromQL queries specified in their annotations and stores the resulting metrics in memory.

Installing the custom metrics adapter

Clone the istio-hpa repository:

git clone https://github.com/stefanprodan/istio-hpa
cd istio-hpa

Deploy the metrics adapter in the kube-system namespace:

kubectl apply -f ./kube-metrics-adapter/

When the adapter starts, it will generate a self-signed certificate and register itself under the custom.metrics.k8s.io group.

The adapter is configured to query the Prometheus instance that's running in the istio-system namespace.
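
If your Prometheus instance runs elsewhere, you can point the adapter at it through its server flag (a sketch of the Deployment args, assuming the Zalando adapter's --prometheus-server flag; verify against the manifest in ./kube-metrics-adapter/):

args:
  - --prometheus-server=http://prometheus.istio-system.svc.cluster.local:9090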

Verify the install by checking the adapter logs:

kubectl -n kube-system logs deployment/kube-metrics-adapter
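
You can also confirm that the aggregated API is registered (assuming the adapter serves the v1beta1 custom metrics API, as most custom metrics adapters do):

kubectl get apiservice v1beta1.custom.metrics.k8s.io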

Installing the demo app

You will use podinfo, a small Golang-based web app, to test the Horizontal Pod Autoscaler.

First create a test namespace with Istio sidecar injection enabled:

kubectl apply -f ./namespaces/
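
Sidecar injection is enabled through Istio's namespace label; the manifest looks roughly like this (a sketch based on the standard istio-injection label convention):

apiVersion: v1
kind: Namespace
metadata:
  name: test
  labels:
    istio-injection: enabled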

Create the podinfo deployment and ClusterIP service in the test namespace:

kubectl apply -f ./podinfo/deployment.yaml,./podinfo/service.yaml

In order to trigger autoscaling, you'll need a tool to generate traffic. Deploy the load test service in the test namespace:

kubectl apply -f ./loadtester/

Verify the install by calling the podinfo API. Exec into the load tester pod and use hey to generate load for a couple of seconds:

export loadtester=$(kubectl -n test get pod -l "app=loadtester" -o jsonpath='{.items[0].metadata.name}')
kubectl -n test exec -it ${loadtester} -- sh

~ $ hey -z 5s -c 10 -q 2 http://podinfo.test:9898

Summary:
  Total:	5.0138 secs
  Requests/sec:	19.9451

Status code distribution:
  [200]	100 responses

~ $ exit

The podinfo ClusterIP service exposes port 9898 under the http name. When using the http prefix, the Envoy sidecar will switch to L7 routing and the telemetry service will collect HTTP metrics.
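
The relevant part of the Service manifest looks like this (a sketch; Istio's protocol detection keys off the port name prefix):

apiVersion: v1
kind: Service
metadata:
  name: podinfo
  namespace: test
spec:
  selector:
    app: podinfo
  ports:
    - name: http
      port: 9898
      protocol: TCP
      targetPort: 9898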

Querying the Istio metrics

The Istio telemetry service collects metrics from the mesh and stores them in Prometheus. One such metric is istio_requests_total; with it you can determine the per-second rate of requests a workload receives.

This is how you can query Prometheus for the req/sec rate received by podinfo in the last minute, excluding 404s:

  sum(
    rate(
      istio_requests_total{
        destination_workload="podinfo",
        destination_workload_namespace="test",
        reporter="destination",
        response_code!="404"
      }[1m]
    )
  )
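
To run this query, you can port-forward the Prometheus instance and use its UI or HTTP API (assuming the service is named prometheus and listens on port 9090, as in the default Istio telemetry setup):

kubectl -n istio-system port-forward svc/prometheus 9090:9090

Then open http://localhost:9090 and paste the query.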

The HPA needs to know the req/sec that each pod receives. You can use the container memory usage metric from kubelet to count the number of pods and divide the total Istio request rate by it. The inner count collapses each pod's per-container series into a single series per pod, and the outer count tallies the pods:

  sum(
    rate(
      istio_requests_total{
        destination_workload="podinfo",
        destination_workload_namespace="test"
      }[1m]
    )
  ) /
  count(
    count(
      container_memory_usage_bytes{
        namespace="test",
        pod=~"podinfo.*"
      }
    ) by (pod)
  )

Configuring the HPA with Istio metrics

Using the req/sec query you can define an HPA that will scale the podinfo workload based on the number of requests per second that each instance receives:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
  namespace: test
  annotations:
    metric-config.object.istio-requests-total.prometheus/per-replica: "true"
    metric-config.object.istio-requests-total.prometheus/query: |
      sum(
        rate(
          istio_requests_total{
            destination_workload="podinfo",
            destination_workload_namespace="test"
          }[1m]
        )
      ) /
      count(
        count(
          container_memory_usage_bytes{
            namespace="test",
            pod=~"podinfo.*"
          }
        ) by (pod)
      )
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  metrics:
    - type: Object
      object:
        metricName: istio-requests-total
        target:
          apiVersion: v1
          kind: Pod
          name: podinfo
        targetValue: 10

The above configuration will instruct the Horizontal Pod Autoscaler to scale up the deployment when the average traffic load goes over 10 req/sec per replica.

Create the HPA with:

kubectl apply -f ./podinfo/hpa.yaml
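
You can inspect the HPA events to confirm that the metric is being fetched:

kubectl -n test describe hpa/podinfo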

Start a load test and verify that the adapter computes the metric:

kubectl -n kube-system logs deployment/kube-metrics-adapter -f

Collected 1 new metric(s)
Collected new custom metric 'istio-requests-total' (44m) for Pod test/podinfo

List the custom metrics resources:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .

The Kubernetes API should return a resource list containing the Istio metric:

{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "pods/istio-requests-total",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
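
You can also fetch the metric value directly through the custom metrics API (a sketch, assuming the standard namespaced pods path exposed by the adapter):

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/test/pods/podinfo/istio-requests-total" | jq .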

After a couple of seconds the HPA will fetch the metric from the adapter. The value is reported in Kubernetes quantity notation, so 44m means 0.044 req/sec:

kubectl -n test get hpa/podinfo

NAME      REFERENCE            TARGETS   MINPODS   MAXPODS   REPLICAS
podinfo   Deployment/podinfo   44m/10    1         10        1

Autoscaling based on HTTP traffic

To test the HPA you can use the load tester to trigger a scale-up event.

Exec into the tester pod and use hey to generate load for 5 minutes:

kubectl -n test exec -it ${loadtester} -- sh

~ $ hey -z 5m -c 10 -q 2 http://podinfo.test:9898

Press Ctrl+C and then type exit to leave the load test shell if you want to stop early.

After a minute the HPA will start to scale up the workload until the req/sec per pod drops under the target value:

watch kubectl -n test get hpa/podinfo

NAME      REFERENCE            TARGETS     MINPODS   MAXPODS   REPLICAS
podinfo   Deployment/podinfo   25272m/10   1         10        3

When the load test finishes, the number of requests per second drops to zero and the HPA starts to scale down the workload. Note that the HPA has a back-off mechanism that prevents rapid scale up/down events; the number of replicas will go back to one after a couple of minutes.

By default the metrics sync happens once every 30 seconds and scaling up/down can only happen if there was no rescaling within the last 3-5 minutes. In this way, the HPA prevents rapid execution of conflicting decisions and gives time for the Cluster Autoscaler to kick in.
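
Both intervals are controlled by kube-controller-manager flags (a sketch; defaults vary between Kubernetes versions):

--horizontal-pod-autoscaler-sync-period=30s
--horizontal-pod-autoscaler-downscale-stabilization=5m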

Wrapping up

Scaling based on traffic is nothing new to Kubernetes; ingress controllers such as NGINX can expose Prometheus metrics for the HPA. The difference with Istio is that you can autoscale backend services as well, i.e. apps that are accessible only from inside the mesh. I'm not a big fan of embedding code in Kubernetes YAML, but the Zalando metrics adapter is flexible enough to allow this kind of custom autoscaling.

If you have any suggestions for improving this guide, please submit an issue or PR on GitHub at stefanprodan/istio-hpa. Contributions are more than welcome!
