Kinvolk service mesh benchmark suite

This is the v2.0 release of our benchmark automation suite.

Please refer to the 1.0 release for the automation discussed in our 2019 blog post.

Content

The suite includes:

  • Helm charts for the benchmark load generator, the metrics-merger job, the prometheus push gateway, and the emojivoto demo app (configs/)
  • Grafana dashboards for live and summary metrics (dashboard/)
  • a script to run a full benchmark suite (scripts/run_benchmarks.sh)
  • a Lokomotive cluster configuration for Equinix Metal (configs/equinix-metal-cluster.lokocfg)

Run a benchmark

Prerequisites:

  • cluster is set up
  • push gateway is installed
  • grafana dashboards are uploaded to Grafana
  • applications are installed

  1. Start the benchmark:

    $ helm install --create-namespace benchmark --namespace benchmark configs/benchmark

    This will start a 120 s, 3000 RPS benchmark against 10 emojivoto app instances, with 96 threads / simultaneous connections. See the helm chart values for all parameters, and use helm command line parameters to override them (e.g. --set wrk2.RPS="500" to change the target RPS).

  2. Refer to the "wrk2 cockpit" Grafana dashboard for live metrics.

  3. After the run has concluded, run the "metrics-merger" job to update the summary metrics:

    $ helm install --create-namespace --namespace metrics-merger \
          metrics-merger configs/metrics-merger/

    This will update the "wrk2 summary" dashboard.

Run a benchmark suite

The benchmark suite script will install applications and service meshes, and run several benchmarks in a loop.

Use the supplied scripts/run_benchmarks.sh to run a full benchmark suite: 5 runs of 10 minutes each for 500-5000 RPS, in 500 RPS increments, with 128 threads, for "bare metal", Linkerd, and Istio service meshes, against 60 emojivoto instances.
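The sweep that script performs looks roughly like the following sketch. The mesh names and the inner install step are assumptions based on the description above; refer to scripts/run_benchmarks.sh for the actual logic.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch: 500..5000 RPS in 500 RPS increments, 5 runs each,
# for bare metal, Linkerd, and Istio (3 * 10 * 5 = 150 benchmark runs).
for mesh in bare-metal linkerd istio; do
    for rps in $(seq 500 500 5000); do
        for run in $(seq 1 5); do
            echo "mesh=$mesh rps=$rps run=$run"
            # install mesh + apps, run the benchmark chart with
            # --set wrk2.RPS="$rps", then tear down (see the steps above)
        done
    done
done
```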

Creating prerequisites

Set up a cluster

We use Equinix Metal infrastructure to run the benchmark on, AWS S3 for sharing cluster state, and AWS Route53 for the clusters' public DNS entries. You'll need an Equinix Metal account and a respective API token, as well as an AWS account and an accompanying secret key, before you can provision a cluster.

You'll also need a recent version of Lokomotive.

  1. Make the authentication tokens available to the lokoctl command. You can do this in a couple of ways. For example, exporting your authentication tokens:

    export PACKET_AUTH_TOKEN="Your Equinix Metal Auth Token"
    export AWS_ACCESS_KEY_ID="your access key for AWS"
    export AWS_SECRET_ACCESS_KEY="your secret for the above access key"
    
  2. Create the Route53 hosted zone that will be used by the cluster, as well as an S3 bucket and a DynamoDB table for storing Lokomotive's state. Check out Lokomotive's "Using S3 as backend" documentation for how to do this.

  3. Create configs/lokocfg.vars by copying the example file configs/lokocfg.vars.example, and editing its contents.

    metal_project_id = "[ID of the equinix metal project to deploy to]"
    route53_zone = "[cluster's route53 zone]"
    state_s3_bucket = "[PRIVATE AWS S3 bucket to share cluster state in]"
    state_s3_key = "[key in S3 bucket, e.g. cluster name]"
    state_s3_region = "[AWS S3 region to use]"
    lock_dynamodb_table = "[DynamoDB table name to use as state lock, e.g. cluster name]"
    region_private_cidr = "[Your Equinix Metal region's private CIDR]"
    ssh_pub_keys = [ "[Your SSH pub keys]" ]
    
  4. Review the benchmark cluster config in configs/equinix-metal-cluster.lokocfg

  5. Provision the cluster by running

    $ cd configs
    configs $ lokoctl cluster apply
    

After provisioning has concluded, make sure to run

$ export KUBECONFIG=assets/cluster-assets/auth/kubeconfig

to get kubectl access to the cluster.
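A quick sanity check that the kubeconfig works: listing the nodes should show every cluster node with STATUS "Ready".

```shell
export KUBECONFIG=assets/cluster-assets/auth/kubeconfig

# All nodes should report STATUS "Ready" before proceeding.
kubectl get nodes
```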

Deploy prometheus push gateway

The benchmark load generator will push intermediate run-time metrics as well as final latency metrics to a prometheus push gateway. A push gateway is currently not bundled with Lokomotive's prometheus component. Deploy by issuing

$ helm install pushgateway --namespace monitoring configs/pushgateway
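Once deployed, the gateway can be smoke-tested by pushing a metric through the standard Pushgateway HTTP API (POST to /metrics/job/&lt;job&gt; in the Prometheus exposition format) — the same mechanism the wrk2 load generator uses. The service name and port below are assumptions; check `kubectl -n monitoring get svc` for the actual values.

```shell
# Forward the gateway port locally (service name/port are assumptions).
kubectl -n monitoring port-forward svc/pushgateway 9091:9091 &

# Push a test metric in Prometheus exposition format.
cat <<'EOF' | curl --data-binary @- http://localhost:9091/metrics/job/smoke-test
# TYPE smoke_test_metric gauge
smoke_test_metric 42
EOF
```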

Deploy demo apps

Demo apps will be used to run the benchmarks against. We'll use Linkerd's emojivoto.

We will deploy multiple instances of each app to emulate many applications in a cluster. For the default setup, which includes 4 application nodes, we recommend deploying 30 "bookinfo" instances, and 40 "emojivoto" instances:

$ for i in $(seq 10) ; do \
      helm install --create-namespace emojivoto-$i \
          --namespace emojivoto-$i \
          configs/emojivoto ; \
  done
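To remove the demo app instances again, a matching loop works (a sketch: `helm uninstall` does not delete the namespaces that `--create-namespace` created, so they are removed explicitly).

```shell
# Sketch: tear down the emojivoto instances and their namespaces.
for i in $(seq 10) ; do
    helm uninstall --namespace emojivoto-$i emojivoto-$i
    kubectl delete namespace emojivoto-$i
done
```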

Upload Grafana dashboard

  1. Get the Grafana Admin password from the cluster
    $ kubectl -n monitoring get secret prometheus-operator-grafana -o jsonpath='{.data.admin-password}' | base64 -d && echo
    
  2. Forward the Grafana service port from the cluster
    $ kubectl -n monitoring port-forward svc/prometheus-operator-grafana 3000:80 &
    
  3. Log in to Grafana and create an API key we'll use to upload the dashboard
  4. Upload the dashboard:
    $ cd dashboard
    dashboard $ ./upload_dashboard.sh "[API KEY]" grafana-wrk2-cockpit.json localhost:3000
    
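Under the hood, an upload like this boils down to a POST against Grafana's dashboard API (/api/dashboards/db), which expects the dashboard JSON wrapped in a {"dashboard": ..., "overwrite": ...} envelope. A minimal curl equivalent (what upload_dashboard.sh actually does is an assumption; this sketch requires jq):

```shell
API_KEY="[API KEY]"                      # the key created in step 3
DASHBOARD=grafana-wrk2-cockpit.json

# Wrap the dashboard JSON in the envelope Grafana's API expects, then POST it.
jq '{dashboard: ., overwrite: true}' "$DASHBOARD" \
  | curl -s -X POST http://localhost:3000/api/dashboards/db \
      -H "Authorization: Bearer $API_KEY" \
      -H "Content-Type: application/json" \
      --data-binary @-
```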
