
Repository Details

Cluster capacity analysis framework

Implementation of cluster capacity analysis.

Intro

As new pods get scheduled on nodes in a cluster, more resources get consumed. Monitoring the available resources in the cluster is important, as it lets operators increase the current resources in time before all of them are exhausted, or take other steps that increase the available resources.

Cluster capacity consists of capacities of individual cluster nodes. Capacity covers CPU, memory, disk space and other resources.

The overall remaining allocatable capacity is only a rough estimate, since it does not account for how resources are distributed among the individual nodes. The goal is therefore to analyze the remaining allocatable resources and estimate the capacity that is still consumable, expressed as the number of instances of a pod with given requirements that can still be scheduled in the cluster.
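
As a rough back-of-the-envelope illustration (assuming, as in the demonstration below, nodes with 2 CPUs each and a pod requesting 150m of CPU, and ignoring everything else running on the nodes):

2000m / 150m ≈ 13 pods per node, i.e. roughly 52 pods across 4 nodes

The scheduling simulation refines this kind of estimate by accounting for per-node allocatable capacity, resources already consumed by running pods, and all other scheduler predicates.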

Build and Run

Build the framework:

$ cd $GOPATH/src/sigs.k8s.io
$ git clone https://github.com/kubernetes-sigs/cluster-capacity
$ cd cluster-capacity
$ make build

and run the analysis:

$ ./cluster-capacity --kubeconfig <path to kubeconfig> --podspec=examples/pod.yaml

For more information about available options run:

$ ./cluster-capacity --help

Demonstration

Assume a cluster running 4 nodes and 1 master, with each node providing 2 CPUs and 4 GB of memory, and a pod with resource requirements of 150m of CPU and 100Mi of memory.

$ ./cluster-capacity --kubeconfig <path to kubeconfig> --podspec=pod.yaml --verbose
Pod requirements:
	- cpu: 150m
	- memory: 100Mi

The cluster can schedule 52 instance(s) of the pod.
Termination reason: FailedScheduling: pod (small-pod-52) failed to fit in any node
fit failure on node (kube-node-1): Insufficient cpu
fit failure on node (kube-node-4): Insufficient cpu
fit failure on node (kube-node-2): Insufficient cpu
fit failure on node (kube-node-3): Insufficient cpu


Pod distribution among nodes:
	- kube-node-1: 13 instance(s)
	- kube-node-4: 13 instance(s)
	- kube-node-2: 13 instance(s)
	- kube-node-3: 13 instance(s)

To decrease the available resources in the cluster you can use the provided ReplicationController (examples/rc.yml):

$ kubectl create -f examples/rc.yml
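
The exact contents of examples/rc.yml may differ between releases; as an illustrative sketch (all names below are hypothetical), such a ReplicationController simply replicates a pod with the same resource requests used in this demonstration:

apiVersion: v1
kind: ReplicationController
metadata:
  name: small-rc              # illustrative name; the real examples/rc.yml may differ
spec:
  replicas: 3
  selector:
    app: small-pod
  template:
    metadata:
      labels:
        app: small-pod
    spec:
      containers:
      - name: pause
        image: gcr.io/google_containers/pause:2.0
        resources:
          requests:
            cpu: 150m
            memory: 100Mi
          limits:
            cpu: 150m
            memory: 100Mi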

For example, to change the number of replicas to 6, you can run:

$ kubectl patch -f examples/rc.yml -p '{"spec":{"replicas":6}}'

Once the number of running pods in the cluster grows and the analysis is run again, the number of schedulable pods decreases as well:

$ ./cluster-capacity --kubeconfig <path to kubeconfig> --podspec=pod.yaml --verbose
Pod requirements:
	- cpu: 150m
	- memory: 100Mi

The cluster can schedule 46 instance(s) of the pod.
Termination reason: FailedScheduling: pod (small-pod-46) failed to fit in any node
fit failure on node (kube-node-1): Insufficient cpu
fit failure on node (kube-node-4): Insufficient cpu
fit failure on node (kube-node-2): Insufficient cpu
fit failure on node (kube-node-3): Insufficient cpu


Pod distribution among nodes:
	- kube-node-1: 11 instance(s)
	- kube-node-4: 12 instance(s)
	- kube-node-2: 11 instance(s)
	- kube-node-3: 12 instance(s)

Output format

The cluster-capacity command has an --output (-o) flag to format its output as JSON or YAML.

$ ./cluster-capacity --kubeconfig <path to kubeconfig> --podspec=pod.yaml -o json
$ ./cluster-capacity --kubeconfig <path to kubeconfig> --podspec=pod.yaml -o yaml

The JSON and YAML output is not versioned and is not guaranteed to be stable across releases.
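
Since the report is written to stdout, it can still be piped into standard tooling; for example, to pretty-print the JSON output with jq (the field names themselves are release-dependent, as noted above):

$ ./cluster-capacity --kubeconfig <path to kubeconfig> --podspec=pod.yaml -o json | jq .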

Running Cluster Capacity as a Job Inside of a Pod

Running the cluster capacity tool as a job inside a pod has the advantage that it can be run repeatedly without user intervention.

Follow these example steps to run Cluster Capacity as a job:

1. Create a Container that runs Cluster Capacity

In this example we create a simple Docker image using the Dockerfile found in the root directory and tag it as cluster-capacity-image:

$ docker build -t cluster-capacity-image .

2. Setup an authorized user with the necessary permissions

$ kubectl apply -f config/rbac.yaml
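
Use the config/rbac.yaml shipped with the repository; its exact contents may differ from what is sketched here. Conceptually it boils down to a ServiceAccount named cluster-capacity-sa (referenced by the job below) bound to a cluster role that can read the cluster state the scheduler simulation inspects. A simplified, illustrative sketch:

# Illustrative sketch only; apply the real config/rbac.yaml from the repository.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-capacity-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-capacity-role
rules:
- apiGroups: [""]
  resources: ["nodes", "pods", "services", "persistentvolumes", "persistentvolumeclaims"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-capacity-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-capacity-role
subjects:
- kind: ServiceAccount
  name: cluster-capacity-sa
  namespace: default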

3. Define and create the pod specification (pod.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: small-pod
  labels:
    app: guestbook
    tier: frontend
spec:
  containers:
  - name: php-redis
    image: gcr.io/google-samples/gb-frontend:v4
    imagePullPolicy: Always
    resources:
      limits:
        cpu: 150m
        memory: 100Mi
      requests:
        cpu: 150m
        memory: 100Mi

The input pod spec file pod.yaml is provided to the analysis through a ConfigMap named cluster-capacity-configmap, which is mounted as the volume test-volume at the path /test-pod.

$ kubectl create configmap cluster-capacity-configmap \
    --from-file pod.yaml
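
You can confirm that the pod spec was stored under the expected key with:

$ kubectl get configmap cluster-capacity-configmap -o yaml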

4. Create the job specification (cluster-capacity-job.yaml):

apiVersion: batch/v1
kind: Job
metadata:
  name: cluster-capacity-job
spec:
  parallelism: 1
  completions: 1
  template:
    metadata:
      name: cluster-capacity-pod
    spec:
      containers:
      - name: cluster-capacity
        image: cluster-capacity-image
        imagePullPolicy: "Never"
        volumeMounts:
        - mountPath: /test-pod
          name: test-volume
        env:
        - name: CC_INCLUSTER
          value: "true"
        command:
        - "/bin/sh"
        - "-ec"
        - |
          /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose
      restartPolicy: "Never"
      serviceAccountName: cluster-capacity-sa
      volumes:
      - name: test-volume
        configMap:
          name: cluster-capacity-configmap

Note that the environment variable CC_INCLUSTER in the example above is required. It tells the cluster capacity tool that it is running inside a cluster as a pod.

The pod.yaml key of the ConfigMap has the same name as the pod specification file, though this is not required. With this key name, the input pod spec file can be accessed inside the pod as /test-pod/pod.yaml.

5. Run the cluster capacity image as a job in a pod:

$ kubectl create -f cluster-capacity-job.yaml
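
Optionally, wait for the job to complete before reading its logs:

$ kubectl wait --for=condition=complete job/cluster-capacity-job --timeout=120s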

6. Check the job logs to find the number of pods that can be scheduled in the cluster:

$ kubectl logs jobs/cluster-capacity-job
small-pod pod requirements:
        - CPU: 150m
        - Memory: 100Mi

The cluster can schedule 52 instance(s) of the pod small-pod.

Termination reason: Unschedulable: No nodes are available that match all of the
following predicates:: Insufficient cpu (2).

Pod distribution among nodes:
small-pod
        - 192.168.124.214: 26 instance(s)
        - 192.168.124.120: 26 instance(s)

Pod spec generator: genpod

genpod is a tool internal to cluster capacity that can be used to create a sample pod spec. In general, users are recommended to provide their own pod spec file as part of the analysis.

As pods are part of a namespace with resource limits and additional constraints (e.g. a node selector forced by a namespace annotation), it is natural to analyze how many instances of a pod with maximal resource requirements can be scheduled. To generate such a pod spec, you can run:

$ genpod --kubeconfig <path to kubeconfig>  --namespace <namespace>

This assumes at least one resource limits object is available with at least one maximum resource type per pod. If multiple resource limits objects exist in the namespace, the minimum of all the per-type maximums is taken. If the namespace is annotated with openshift.io/node-selector, the selector is set as the pod's node selector.

Example:

Assume a cluster-capacity namespace with the openshift.io/node-selector: "region=hpc,load=high" annotation and resource limits created (see examples/namespace.yml and examples/limits.yml):

$ kubectl describe limits hpclimits --namespace cluster-capacity
Name:           hpclimits
Namespace:      cluster-capacity
Type            Resource        Min     Max     Default Request Default Limit   Max Limit/Request Ratio
----            --------        ---     ---     --------------- -------------   -----------------------
Pod             cpu             10m     200m    -               -               -
Pod             memory          6Mi     100Mi   -               -               -
Container       memory          6Mi     20Mi    6Mi             6Mi             -
Container       cpu             10m     50m     10m             10m             -
$ genpod --kubeconfig <path to kubeconfig>  --namespace cluster-capacity
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: cluster-capacity-stub-container
  namespace: cluster-capacity
spec:
  containers:
  - image: gcr.io/google_containers/pause:2.0
    imagePullPolicy: Always
    name: cluster-capacity-stub-container
    resources:
      limits:
        cpu: 200m
        memory: 100Mi
      requests:
        cpu: 200m
        memory: 100Mi
  dnsPolicy: Default
  nodeSelector:
    load: high
    region: hpc
  restartPolicy: OnFailure
status: {}

Roadmap

Underway:

  • analysis covering scheduler and admission controller
  • generic framework for any scheduler created by the default scheduler factory
  • continuous stream of estimations

Would like to get soon:

  • include multiple schedulers
  • accept a list (sequence) of pods
  • extend analysis with volume handling
  • define a common interface each scheduler needs to implement to be embedded in the framework

Other possibilities:

  • incorporate re-scheduler
  • incorporate preemptive scheduling
  • include more of Kubelet's behaviour (e.g. recognizing memory pressure, checking for secret/configmap existence)

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.
