
Workload Benchmark for Kubernetes

K-Bench

K-Bench is a framework to benchmark the control plane and data plane of a Kubernetes infrastructure. It provides a configurable way to prescriptively create and manipulate Kubernetes resources at scale and report the relevant control plane and data plane performance metrics for the target infrastructure. Example operations include CREATE, UPDATE, LIST, DELETE, RUN, and COPY on different types of Kubernetes resources, including Pod, Deployment, Service, and ReplicationController.

K-Bench has the following features:

  • It allows users to control client-side concurrency, the operations, and whether these operations are executed in sequence or in parallel. In particular, users can define, through a config file, a workflow of operations for supported resources, and can specify parameters such as the image for a Pod or the number of replicas for a Deployment.
  • For control-plane (life-cycle management) performance, the benchmark complements the server-side timing adopted by many existing benchmarks with a client-side approach based on Kubernetes' event callback mechanism. This addresses the coarse-grained timestamp issue on the server side and improves measurement precision.
  • K-Bench also includes built-in benchmarks that let users study data plane performance by independently scaling up and scaling out infrastructure resource usage in terms of compute, memory, I/O, and network. The framework comes with blueprints for running these benchmarks in various ways at scale to evaluate specific aspects of a K8s infrastructure. For example, the integrated dp_network_internode test automatically places two pods on two different K8s nodes to measure inter-pod network latency and bandwidth.
  • It supports Docker Compose files and converts them into Kubernetes spec files.
  • It is integrated with Prometheus, which can be enabled by simply including Prometheus configuration options in the benchmark config file. K-Bench also supports Wavefront, an enterprise-grade data analytics and monitoring platform.
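
As a minimal illustration of the config-driven workflow described above, a workload config might look like the following sketch. The option names (Operations, Pods, Actions, Count, Image) come from this document, but the exact nesting is an assumption; check ./config/default/config.json for the authoritative schema:

```json
{
  "BlockingLevel": "operation",
  "Cleanup": true,
  "Operations": [
    {
      "Pods": {
        "Actions": [
          { "Act": "CREATE", "Spec": { "ImagePullPolicy": "IfNotPresent", "Image": "nginx" } }
        ],
        "Count": 8
      }
    }
  ]
}
```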

Architecture

The above diagram shows an overview of the benchmark. Upon starting, a JSON config file is parsed for infrastructure and operation information. A sequence of operations is then generated, each of which may contain a list of actions such as create, list, scale, etc. The operations run one by one, optionally in a blocking manner. Actions inside one operation, however, run in parallel using Go routines. The actions supported for different types of resources are defined in their respective managers. The resource managers also provide a metrics collection mechanism and produce Wavefront-consumable data. The benchmark uses client-go to communicate with the Kubernetes cluster.

K-Bench is extremely flexible in that it allows virtually any supported action to be performed, with user-chosen parameters, on selected resource objects serially, in parallel, or in a hybrid manner. To achieve this, a crucial problem to address is determining how actions and resources are handled or partitioned by different threads. We call this workload dispatch.

In K-Bench, dispatch for actions is straightforward: the configuration parser scans the entire config file and determines the maximum concurrency for each operation by summing the Count fields of the different resource types in the operation. The dispatcher spawns and maintains all Go routines so that the corresponding actions of different resource types in an operation are fully parallelized. Different actions for the same resource in an operation share the same Go routine and are executed in order.

To achieve dispatch for resource objects, K-Bench maintains two types of labels, k-labels and u-labels, for each resource object. K-Bench assigns each Go routine a TID and each operation an OID, which are attached as k-labels to the relevant objects. Other metadata such as resource type, app name, benchmark name, etc., are also attached as k-labels when a resource is created. K-Bench provides predefined label matching options, such as MATCH_GOROUTINE and MATCH_OPERATION, to select objects created by a specified routine in certain operations. User labels passed through the benchmark configuration specification are attached to resources as u-labels, which can also be used for resource dispatch.
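To make the dispatch rule concrete, consider a hypothetical operation with two resource types. The field names follow this document, but the exact schema is an assumption and may differ from the real config files:

```json
{
  "Operations": [
    {
      "Pods":        { "Actions": [{ "Act": "CREATE" }], "Count": 8 },
      "Deployments": { "Actions": [{ "Act": "CREATE" }], "Count": 4 }
    }
  ]
}
```

Here the parser would determine a maximum concurrency of 12 for the operation (8 + 4): the Pod and Deployment actions run fully in parallel on separate Go routines, while multiple actions listed for the same resource share a routine and run in order.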

Control Plane Metrics

After a successful run, the benchmark reports metrics (e.g., number of requests, API invoke latency, throughput) for the executed operations on various resource types. One resource type whose metrics need special consideration is Pod, as its operations are typically long-running and asynchronous. For Pod (and related resource types such as Deployment), we introduce two sets of metrics, server-side and client-side, to better summarize its performance from different perspectives. The server-side metrics for Pod in K-Bench inherit the definitions suggested by the Kubernetes SIG group (the exact way those Pod metrics are defined can be seen in the density and performance e2e tests, e.g., density_test.go). The client-side set of metrics, collected via an event callback mechanism, is a more accurate reflection of the end-to-end time taken for Pod state transitions. The table below describes all the supported metrics:

| Metric [1] | Definition | Resource Types | Notes & Sources |
| --- | --- | --- | --- |
| Pod creation latency (server) | scheEvent.FirstTimestamp (the FirstTimestamp of a scheduling event associated with a pod) - pod.CreationTimestamp (the CreationTimestamp of the pod object) | Pod, Deployment | density.go |
| Pod scheduling latency (server) | pod.Status.StartTime (the server timestamp indicating when a pod is accepted by kubelet but its image is not pulled yet) - scheEvent.FirstTimestamp | Pod, Deployment | density.go |
| Pod image pulling latency (server) | pulledEvent.FirstTimestamp (the FirstTimestamp of an event with "Pulled" as the reason associated with a pod) - pod.Status.StartTime | Pod, Deployment | a new metric defined in pod_manager.go for K-Bench |
| Pod starting latency (server) | max(pod.Status.ContainerStatuses[...].State.Running.StartedAt) (the StartedAt timestamp of the last container to enter the running state inside a pod) - pulledEvent.FirstTimestamp | Pod, Deployment | density.go |
| Pod startup total latency (server) | max(pod.Status.ContainerStatuses[...].State.Running.StartedAt) - pod.CreationTimestamp | Pod, Deployment | density.go |
| Pod client-server e2e latency | the first time the client watches pod.Status.Phase become Running - pod.CreationTimestamp (the server-side CreationTimestamp of the pod object) | Pod, Deployment | similar to the "watch" latency in the e2e tests |
| Pod scheduling latency (client) | the first time the client watches pod.Status.Conditions[...] gain a PodScheduled condition - the first time the client watches the pod (which thus has no PodScheduled condition yet) | Pod, Deployment | a new metric defined in pod_manager.go |
| Pod initialization latency (client) | the first time the client watches pod.Status.Conditions[...] gain a PodInitialized condition - the first time it gains a PodScheduled condition | Pod, Deployment | a new metric defined in pod_manager.go |
| Pod starting latency (client) | the first time the client watches pod.Status.Phase become Running - the first time pod.Status.Conditions[...] gains a PodInitialized condition | Pod, Deployment | a new metric defined in pod_manager.go; there is no client-side watch event for image pulling, so this metric includes image pulling |
| Pod startup total latency (client) | the first time the client watches pod.Status.Phase become Running - the first time the client watches the pod | Pod, Deployment | a new metric defined in pod_manager.go |
| Pod creation throughput | sum(number of running pods of every operation that has pod actions / 2) / sum(median Pod startup total latency of every operation that has pod actions) | Pod, Deployment | a new metric defined in pod_manager.go |
| API invoke latency | latency for an API call to return | All resource types | a new metric defined in pod_manager.go |

Data Plane Workloads and Metrics

| Metric [1] | Resource Category | Benchmark | Notes |
| --- | --- | --- | --- |
| Transaction throughput | CPU/Memory | Redis Memtier | Maximum achievable throughput aggregated across pods in a cluster |
| Transaction latency | CPU/Memory | Redis Memtier | Latency for the injected SET/GET transactions |
| Pod density | CPU/Memory | Redis Memtier | Transaction throughput and latency for a given pod density |
| I/O bandwidth (IOPS) | I/O | FIO | Synchronous and asynchronous read/write bandwidth for 70-30, 100-0, and 0-100 read-write ratios and various block sizes on various K8s volumes |
| I/O latency (ms) | I/O | Ioping | Disk I/O latency on ephemeral and persistent K8s volumes |
| Network bandwidth | Network | Iperf3 | Inter-pod TCP and UDP performance with varying pod placements across nodes and zones |
| Network latency (ms) | Network | Qperf | Inter-pod network latency for TCP and UDP packets with varying pod placements |

Infrastructure Diagnostic Telemetry

In addition to the metrics the benchmark reports, K-Bench can be configured to report Wavefront- and Prometheus-defined metrics, including memory, CPU, and storage utilization of nodes, namespaces, and pods; cluster-level statistics; bytes-transferred and bytes-received rates between pods; uptime; and other infrastructure statistics.

To use Wavefront monitoring of the nodes, install the Waverunner component using pkg/waverunner/install.sh. Invoking this script without any parameters prints the help menu. To start telemetry, invoke pkg/waverunner/WR_wcpwrapper.sh as follows:

./WR_wcpwrapper.sh -r <run_tag> -i <Host_IP_String> -w <Wavefront_source> [-o <output_folder> -k <ssh_key_file> -p <host_passwd>]

The above command defaults to /tmp as the output folder and a null host password.

To use Prometheus as your metrics monitoring mechanism, configure the PrometheusManifestPaths option in the K-Bench config file. See the top-level configuration options section below and the Prometheus README.
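
A hypothetical top-level fragment enabling Prometheus might look like the following. The PrometheusManifestPaths option name comes from this document, but the value type and path shown here are placeholders; take the real manifest location from the Prometheus README:

```json
{
  "PrometheusManifestPaths": ["config/prometheus"],
  "Operations": []
}
```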

K-Bench Quickstart Guide

To use K-Bench, clone this repo and install the benchmark; you can then run workloads against your K8s cluster by following the instructions below.

Install using Script

On a Linux box (tested on Ubuntu 16.04), just invoke:

./install.sh

to install the benchmark.

If you would like the kbench binary to be copied to /usr/local/bin so that you can run it directly without specifying the full path, run the script with sudo.

On systems like Ubuntu, being able to use sudo is enough; you do not need to be the "root" user. Also, please ensure that the K8s nodes and the client on which you run K-Bench have synchronized clocks, as K-Bench uses both client- and server-side timestamps to calculate latencies.

Run the Benchmark

Once the installation completes, you can start using K-Bench. To run the benchmark, make sure your ~/.kube/config file or the KUBECONFIG environment variable points to a valid and running Kubernetes cluster. To verify this, you may install kubectl (which expects a ~/.kube/config file in place; you can copy it from the master node) and simply run:

kubectl get nodes

Once you have verified that you have a running Kubernetes cluster, the workload can be run directly using the kbench Go binary or the run.sh script. The default benchmark config file, ./config/default/config.json, specifies the workload to run; you can modify it to run a workload of your choice. After that, simply run:

kbench

or

./run.sh

If your config file is at a different location, use the -benchconfig option when invoking the kbench binary directly:

kbench -benchconfig filepath

If the filepath is a directory, the benchmark will run the config files it contains one by one.

When using the run.sh script, invoking it with -h prints the following help menu:

Usage: ./run.sh -r <run-tag> [-t <comma-separated-tests> -o <output-dir>]
Example: ./run.sh -r "kbench-run-on-XYZ-cluster"  -t "cp_heavy16,dp_netperf_internode,dp_fio" -o "./"

Valid test names:

all || all_control_plane || all_data_plane || cp_heavy_12client || cp_heavy_8client || cp_light_1client || cp_light_4client || default || dp_fio || dp_network_internode || dp_network_interzone || dp_network_intranode || dp_redis || dp_redis_density || predicate_example

To get details about each existing workload, check the individual README or config.json in the config/<test-name> folder. For more details about how to configure a workload, check the examples under the ./config directory or read the benchmark configuration section of this document.

Adding a new test to use with run.sh

Add a new folder config/<test-name>, include the run configuration as config/<test-name>/config.json, and run the test by providing <test-name> to the -t option of run.sh.

Alternative Installation Method: Install Manually with Go (old way with GOROOT and GOPATH)

First, set up your Go environment. Download Go, unpack it to a local directory (e.g., /root/go), and point your GOROOT environment variable there. Also set your GOPATH (e.g., /root/gocode). The instructions below are an example for your reference (assuming you downloaded Go to /root/go):

cd /root/go

gunzip go***.linux-amd64.tar.gz

tar -xvf go***.linux-amd64.tar

mkdir /root/gocode && cd gocode/

export GOPATH=/root/gocode

export GOROOT=/root/go

export PATH=$PATH:/root/go/bin

Clone or download the benchmark source code to $GOPATH/src/k-bench (create this directory if it does not exist) using Git or other means.

mkdir -p $GOPATH/src

mkdir -p $GOPATH/src/k-bench

After you have all the files under $GOPATH/src/k-bench, cd to that directory.

It is also handy to include in your PATH the locations where Go typically places and finds binaries and tools:

export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

Now you are ready to build the benchmark. You can either use the command below to install the kbench binary into $GOPATH/bin:

go install cmd/kbench.go

or run (under the $GOPATH/src/k-bench directory) the below to generate the kbench executable under $GOPATH/src/k-bench/bin:

mkdir -p bin && cd bin && go build k-bench/cmd/kbench.go

Benchmark Configuration

The benchmark is highly configurable through a JSON config file. The ./config/default/config.json file is provided as an example (it is also the default benchmark config file if the user does not specify one through the -benchconfig option). More config examples can be found under the ./config directory and its subdirectories.

Top Level Configuration Options

At top level, the benchmark supports the following configuration options:

  • BlockingLevel: Currently can be set to "operation", so that the benchmark waits (for pod creation and deletion) until the previous operation is completely done before executing the next. If this option is not specified, the benchmark only waits the specified sleep time after each action and then proceeds to the next operation, even if there are outstanding actions in the previous operation.
  • Timeout: Used with BlockingLevel. The longest time, in milliseconds, that the benchmark waits after each operation when BlockingLevel is specified. The default is 3 minutes.
  • CheckingInterval: The interval, in milliseconds, at which to check whether all the actions in the previous operation have completed. The default is 3 seconds.
  • Cleanup: Whether to clear all created resources after a run completes. The default is false.
  • Operations: An array of operation structures, each of which can contain a list of resource action configurations. Each resource action configuration includes information such as the resource type, actions (a list of actions to be performed in order), count (number of actions to be executed in parallel), sleep times (time to sleep after each parallel batch of actions), the image to use, etc. For details, refer to Operation Configuration below.
  • RuntimeInMinutes: The time, in minutes, that the benchmark should run. If all the configured operations complete before the specified time has elapsed, the benchmark loops over and runs the configured operations again.
  • PrometheusManifestPaths: If configured, installs and enables the Prometheus stack for cluster monitoring. See the Prometheus README for details.
  • WavefrontPathDir: Tells the benchmark where to store Wavefront output logs.
  • SleepTimeAfterRun: Use this option to add sleep time after each run.
  • Tags: If specified, the Wavefront output and logs are tagged with the given keys and values.
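
Putting the options above together, a top-level config might look like the following sketch. The option names come from this document; the values and the empty Operations array are illustrative and should be checked against ./config/default/config.json:

```json
{
  "BlockingLevel": "operation",
  "Timeout": 180000,
  "CheckingInterval": 3000,
  "Cleanup": true,
  "RuntimeInMinutes": 10,
  "Operations": []
}
```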

Operation Configuration

In each operation in the "Operations" array, users can specify one or more resource types; each resource type can have a list of actions to perform, and each action may accept options. Below are example resource types (a subset of all supported) with their corresponding actions and options:

  • Pods: The Pod resource supports the "CREATE", "LIST", "GET", "RUN", "COPY", "UPDATE", and "DELETE" actions. For the CREATE action, users can specify options including "ImagePullPolicy" (one of "IfNotPresent", "Always", and "Never"), "Image", etc. If a "YamlSpec" option is specified, the CREATE action first tries to use the yaml file to create the Pod before falling back to other explicit options such as "ImagePullPolicy" and "Image". For the RUN action, users can provide the "Command" option, the command to be executed in the specified Pod(s). For the COPY action, users can specify the LocalPath, ContainerPath, and Upload options. Certain Pod actions (LIST, RUN, COPY) can be applied to a selected/filtered list of Pods using the LabelKey and LabelValue options. For all Pod actions, the "Count" option specifies the concurrency and "SleepTime" specifies the sleep time to incur after each action.
  • Deployments: The Deployment resource type supports all the options that Pod does; in addition, it supports the "SCALE" action and the "NumReplicas" option for its CREATE action. It does not yet support the "RUN" and "COPY" actions.
  • Namespaces: The Namespace resource type supports the "CREATE", "LIST", "GET", "UPDATE", and "DELETE" actions. Like the resource types above, it has a "Count" option (the number of namespace actions to be performed in parallel).
  • Services: The Service resource type supports the "CREATE", "LIST", "GET", "UPDATE", and "DELETE" actions. It has "SleepTimes", "Count", and "YamlSpec" options.
  • ReplicationControllers: The ReplicationController resource type supports the same options and actions as Deployment.

The benchmark also supports other resource types including ConfigMap, Event, Endpoints, ComponentStatus, Node, LimitRange, PersistentVolume, PersistentVolumeClaim, PodTemplate, ResourceQuota, Secret, ServiceAccount, Role, RoleBinding, ClusterRole, ClusterRoleBinding, etc.

In addition to the different resource types, an operation can also specify a RepeatTimes option to run the operation a given number of times.
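
As an illustration of the per-operation options described above, the following hypothetical operation creates Pods and then runs a command in them. The action and option names are taken from this document, but the exact nesting is an assumption; verify it against the samples under ./config:

```json
{
  "Operations": [
    {
      "Pods": {
        "Actions": [
          { "Act": "CREATE", "Spec": { "ImagePullPolicy": "IfNotPresent", "Image": "nginx" } },
          { "Act": "RUN", "Spec": { "Command": "uname -a" } }
        ],
        "Count": 4,
        "SleepTimes": [1000, 1000]
      },
      "RepeatTimes": 2
    }
  ]
}
```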

For more supported resources, actions, and configuration options in K-Bench, please check out the sample config files under ./config or the source code.

Operation Predicate

To simplify synchronization and orchestrate operation execution flow, the benchmark supports Predicates, which block an operation's execution until certain conditions are met. A predicate is configured through the options below:

  • Resource: Has two possible formats: namespace/kind/[object name/][container name] or group/version/namespaces/namespace/kind/[object name/][container name]. With the first format, you specify a namespace and a Kubernetes resource kind, optionally with an object name and a container name (the latter only valid if the resource kind is Pod); the benchmark searches resources with the default Kubernetes group ("") and API version (v1). With the second format, the benchmark searches resources using the given group, version, and kind. Once this predicate (called a resource predicate) is specified, the matched resource must exist before the operation can be executed. If a container name is specified, the corresponding pod has to be in the Running phase for the operation to proceed.
  • Labels: Has the format key1=value1;key2=value2;.... Labels are used to filter resource objects if only a namespace and kind are given in the Resource option.
  • Command: Executes a command (inside the container if a container name is specified in the Resource option, or on the box where the benchmark is invoked). It works with Expect below.
  • Expect: Currently supports formats such as contains:string or !contains:string. With this option configured, the benchmark checks the output of the Command execution and proceeds only if the output matches the expectation.
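
A hypothetical predicate might look like the following. The Resource, Command, and Expect formats follow this document, but the object, container, and namespace names are made up and the surrounding structure is an assumption; see ./config/predicate_example for real samples:

```json
{
  "Predicate": {
    "Resource": "kbench-ns/Pod/redis-server/redis",
    "Command": "redis-cli ping",
    "Expect": "contains:PONG"
  }
}
```

This would block the operation until the redis container of the redis-server pod is Running and responds to a ping.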

For examples on how to use predicates, you may check config file samples under ./config/predicate_example.

Contributing to the Benchmark

Please contact the project members and read CONTRIBUTING.md if you are interested in making contributions.

Project Leads

Karthik Ganesan Email: [email protected] for questions and comments

Contributors

Yong Li, Helen Liu

Footnotes

  1. For each latency-related metric, four values are reported: median, min, max, and 99th percentile.
