AWS IAM Authenticator for Kubernetes

A tool to use AWS IAM credentials to authenticate to a Kubernetes cluster. The initial work on this tool was driven by Heptio. The project receives contributions from multiple community engineers and is currently maintained by Heptio and Amazon EKS OSS Engineers.

Why do I want this?

If you are an administrator running a Kubernetes cluster on AWS, you already need to manage AWS IAM credentials to provision and update the cluster. By using AWS IAM Authenticator for Kubernetes, you avoid having to manage a separate credential for Kubernetes access. AWS IAM also provides a number of nice properties, such as an out-of-band audit trail (via CloudTrail) and 2FA/MFA enforcement.

If you are building a Kubernetes installer on AWS, AWS IAM Authenticator for Kubernetes can simplify your bootstrap process. You won't need to somehow smuggle your initial admin credential securely out of your newly installed cluster. Instead, you can create a dedicated KubernetesAdmin role at cluster provisioning time and set up Authenticator to allow cluster administrator logins.

How do I use it?

Assuming you have a cluster running in AWS and you want to add AWS IAM Authenticator for Kubernetes support, you need to:

  1. Create an IAM role you'll use to identify users.
  2. Run the Authenticator server as a DaemonSet.
  3. Configure your API server to talk to Authenticator.
  4. Create IAM role/user to Kubernetes user/group mappings.
  5. Set up kubectl to use Authenticator tokens.

1. Create an IAM role

First, you must create one or more IAM roles that will be mapped to users/groups inside your Kubernetes cluster. The easiest way to do this is to log into the AWS Console:

  • Choose the "Role for cross-account access" / "Provide access between AWS accounts you own" option.
  • Paste in your AWS account ID number (available in the top right in the console).
  • Your role does not need any additional policies attached.

This will create an IAM role with no permissions that can be assumed by authorized users/roles in your account. Note the Amazon Resource Name (ARN) of your role, which you will need below.

You can also do this in a single step using the AWS CLI instead of the AWS Console:

# get your account ID
ACCOUNT_ID=$(aws sts get-caller-identity --output text --query 'Account')

# define a role trust policy that opens the role to users in your account (limited by IAM policy)
POLICY=$(echo -n '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::'; echo -n "$ACCOUNT_ID"; echo -n ':root"},"Action":"sts:AssumeRole","Condition":{}}]}')

# create a role named KubernetesAdmin (will print the new role's ARN)
aws iam create-role \
  --role-name KubernetesAdmin \
  --description "Kubernetes administrator role (for AWS IAM Authenticator for Kubernetes)." \
  --assume-role-policy-document "$POLICY" \
  --output text \
  --query 'Role.Arn'

You can also skip this step and use:

  • An existing role (such as a cross-account access role).
  • An IAM user (see mapUsers below).
  • An EC2 instance or a federated role (see mapRoles below).

2. Run the server

The server is meant to run on each of your master nodes as a DaemonSet with host networking so it can expose a localhost port.

For a sample ConfigMap and DaemonSet configuration, see deploy/example.yaml.
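As a minimal sketch, you might deploy and verify it like this (the namespace and DaemonSet name are assumed to match the example manifest's defaults and may differ in your deployment):

# deploy the example manifest after editing its clusterID and role mappings
kubectl apply -f deploy/example.yaml

# verify the authenticator pods are running on each master node
# (resource names assume the example manifest's defaults)
kubectl get daemonset -n kube-system aws-iam-authenticator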

(Optional) Pre-generate a certificate, key, and kubeconfig

If you're building an automated installer, you can also pre-generate the certificate, key, and webhook kubeconfig files easily using aws-iam-authenticator init. This command will generate files and place them in the configured output directories.

You can run this on each master node prior to starting the API server. You could also generate them before provisioning master nodes and install them in the appropriate host paths.

If you do not pre-generate files, aws-iam-authenticator server will generate them on demand. This works but requires that you restart your Kubernetes API server after installation.
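For example, a minimal sketch using the placeholder cluster ID from elsewhere in this document (the -i flag is assumed here; check aws-iam-authenticator init --help for the exact flags and output locations):

# generate the certificate, key, and webhook kubeconfig for this cluster
aws-iam-authenticator init -i my-dev-cluster.example.com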

3. Configure your API server to talk to the server

The Kubernetes API integrates with AWS IAM Authenticator for Kubernetes using a token authentication webhook. When you run aws-iam-authenticator server, it will generate a webhook configuration file and save it onto the host filesystem. You'll need to add a single additional flag to your API server configuration:

--authentication-token-webhook-config-file=/etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml

On many clusters, the API server runs as a static pod. You can add the flag to /etc/kubernetes/manifests/kube-apiserver.yaml. Make sure the host directory /etc/kubernetes/aws-iam-authenticator/ is mounted into your API server pod. You may also need to restart the kubelet daemon on your master node to pick up the updated static pod definition:

systemctl restart kubelet.service

4. Create IAM role/user to Kubernetes user/group mappings

The default behavior of the server is to source mappings exclusively from the mapUsers and mapRoles fields of its configuration file. See Full Configuration Format below for details.

Using the --backend-mode flag, you can configure the server to source mappings from additional backends: an EKS-style ConfigMap (--backend-mode=EKSConfigMap), IAMIdentityMapping custom resources (--backend-mode=CRD), or a dynamically reloaded local file (--backend-mode=DynamicFile). The default backend, the server configuration file that's mounted by the server pod, corresponds to --backend-mode=MountedFile.

You can pass a comma-separated list of these backends to have the server search them in order. For example, with --backend-mode=EKSConfigMap,MountedFile, the server will search the EKS-style ConfigMap for mappings and then, if it doesn't find a mapping for the given IAM role/user, the server configuration file. If a mapping for the same IAM role/user exists in multiple backends, the server will use the mapping from the backend that occurs first in the comma-separated list. In this example, if a mapping is found in the EKS ConfigMap, it will be used whether or not a duplicate or conflicting mapping exists in the server configuration file.

Note that when setting a single backend, the server will only source from that one and ignore the others even if they exist. For example, with --backend-mode=CRD, the server will only source from IAMIdentityMappings and ignore the mounted file and EKS ConfigMap.
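For example, a hedged sketch of the server invocation (the config path is illustrative, and the --config flag is assumed to be how your deployment points the server at its configuration file):

# search the EKS-style ConfigMap first, falling back to the mounted file
aws-iam-authenticator server \
  --backend-mode=EKSConfigMap,MountedFile \
  --config=/etc/aws-iam-authenticator/config.yaml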

MountedFile

This is the default mappings backend and is sufficient for most users. See Full Configuration Format below for details.

CRD (alpha)

This backend models each IAM mapping as an IAMIdentityMapping Kubernetes Custom Resource. This approach enables you to maintain mappings in a Kubernetes-native way using kubectl or the API. Plus, syntax errors (like misaligned YAML) can be more easily caught and won't affect all mappings.

To set up the IAMIdentityMapping CRD, you'll first need to apply the CRD manifest:

kubectl apply -f deploy/iamidentitymapping.yaml

With the CRD deployed, you can then create Custom Resources that model your IAM identities. See ./deploy/example-iamidentitymapping.yaml:

---
apiVersion: iamauthenticator.k8s.aws/v1alpha1
kind: IAMIdentityMapping
metadata:
  name: kubernetes-admin
spec:
  # ARN of the user or role that is allowed to authenticate
  arn: arn:aws:iam::XXXXXXXXXXXX:user/KubernetesAdmin
  # Username that Kubernetes will see the user as; useful for granting
  # specific permissions to different users
  username: kubernetes-admin
  # Groups to be attached to your users/roles. For example `system:masters` to
  # create cluster admin, or `system:nodes`, `system:bootstrappers` for nodes to
  # access the API server.
  groups:
  - system:masters
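Once applied, the mappings can be managed like any other Kubernetes resource (the plural resource name below is assumed from the IAMIdentityMapping kind):

# create the mapping from the example above
kubectl apply -f deploy/example-iamidentitymapping.yaml

# list the identity mappings the authenticator will consult
kubectl get iamidentitymappings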

EKSConfigMap

The EKS-style kube-system/aws-auth ConfigMap serves as the backend. The ConfigMap is expected to be in exactly the same format as in EKS clusters: https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html. This is useful if you're migrating from/to EKS and want to keep your mappings, or are running EKS in addition to some other AWS cluster(s) and want to have the same mappings in each.
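As a sketch of the expected shape, reusing the placeholder role ARN from the examples above (the linked EKS documentation is the authoritative reference for this format):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::000000000000:role/KubernetesAdmin
      username: kubernetes-admin
      groups:
      - system:masters
EOF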

DynamicFile

A local file specified by cfg.dynamicfilepath can serve as the backend. The file content is expected to be in exactly the same format as the EKSConfigMap. Whenever the file content changes, the authenticator will automatically reload it. This provides more flexibility in managing the ARN mappings.

See https://github.com/kubernetes-sigs/aws-iam-authenticator/blob/master/hack/dev/authenticator_with_dynamicfile_mode.yaml for an example of how to configure DynamicFile mode.
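As a rough sketch, under the assumption that dynamicfilepath lives alongside backendMode in the server section (the linked example is authoritative; all paths here are illustrative):

# write a standalone config enabling DynamicFile mode
cat > /etc/aws-iam-authenticator/config.yaml <<'EOF'
clusterID: my-dev-cluster.example.com
server:
  backendMode:
  - DynamicFile
  dynamicfilepath: /etc/aws-iam-authenticator/mappings.yaml
EOF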

Run make e2e RUNNER=kind to experiment with a kind cluster with DynamicFile mode enabled.

5. How to configure reservedPrefixConfig for Kubernetes usernames

aws-iam-authenticator supports reserving prefixes for Kubernetes usernames. If a reserved prefix is set, a username beginning with that prefix will be rejected with the error "username must not begin with the following prefixes:".

See https://github.com/kubernetes-sigs/aws-iam-authenticator/blob/master/hack/dev/authenticator_with_dynamicfile_mode.yaml for an example of how to configure the reserved prefix.

6. Set up kubectl to use authentication tokens provided by AWS IAM Authenticator for Kubernetes

This requires a 1.10+ kubectl binary to work. If you receive Please enter Username: when trying to use kubectl, you need to update to the latest kubectl.

Finally, once the server is set up, you'll want to authenticate. You will still need a kubeconfig that has the public data about your cluster (cluster CA certificate, endpoint address). The users section of your configuration, however, should include an exec section (refer to the v1.10 docs):

# [...]
users:
- name: kubernetes-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "REPLACE_ME_WITH_YOUR_CLUSTER_ID"
        - "-r"
        - "REPLACE_ME_WITH_YOUR_ROLE_ARN"
  # no client certificate/key needed here!

This means the kubeconfig is entirely public data and can be shared across all Authenticator users. It may make sense to upload it to a trusted public location such as AWS S3.

Make sure you have the aws-iam-authenticator binary installed. You can install it with go install sigs.k8s.io/aws-iam-authenticator/cmd/aws-iam-authenticator@latest (on older Go toolchains, go get -u -v sigs.k8s.io/aws-iam-authenticator/cmd/aws-iam-authenticator).

To authenticate, run kubectl --kubeconfig /path/to/kubeconfig [...]. kubectl will exec the aws-iam-authenticator binary with the supplied params from your kubeconfig, which will generate a token and pass it to the apiserver. The token is valid for 15 minutes (the shortest value AWS permits) and can be reused multiple times.
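You can also run the same command kubectl runs to inspect a token directly (using the placeholders from the kubeconfig example above):

# generate a token by hand (the same invocation kubectl runs via the exec plugin)
aws-iam-authenticator token -i REPLACE_ME_WITH_YOUR_CLUSTER_ID -r REPLACE_ME_WITH_YOUR_ROLE_ARN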

You can also specify a session name when generating the token by including the --session-name or -s parameter. This parameter cannot be used along with --forward-session-name.

You can also omit -r ROLE_ARN to sign the token with your existing credentials without assuming a dedicated role. This is useful if you want to authenticate as an IAM user directly or if you want to authenticate using an EC2 instance role or a federated role.

Kops Usage

Clusters managed by Kops can be configured to use Authenticator. For usage instructions see the Kops documentation.

How does it work?

It works using the AWS sts:GetCallerIdentity API endpoint. This endpoint returns information about whatever AWS IAM credentials you use to connect to it.
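You can see the kind of identity information STS returns by calling the same API with your own credentials:

# returns the UserId, Account, and Arn for the credentials in your environment
aws sts get-caller-identity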

Client side (aws-iam-authenticator token)

We use this API in a somewhat unusual way by having the Authenticator client generate and pre-sign a request to the endpoint. We serialize that request into a token that can pass through the Kubernetes authentication system.

Server side (aws-iam-authenticator server)

The token is passed through the Kubernetes API server and into the Authenticator server's /authenticate endpoint via a webhook configuration. The Authenticator server validates all the parameters of the pre-signed request to make sure nothing looks funny. It then submits the request to the real https://sts.amazonaws.com server, which validates the client's HMAC signature and returns information about the user. Now that the server knows the AWS identity of the client, it translates this identity into a Kubernetes user and groups via a simple static mapping.

This mechanism is borrowed with a few changes from Vault.

What is a cluster ID?

The Authenticator cluster ID is a unique-per-cluster identifier that prevents certain replay attacks. Specifically, it prevents one Authenticator server (e.g., in a dev environment) from using a client's token to authenticate to another Authenticator server in another cluster.

The cluster ID does need to be unique per-cluster, but it doesn't need to be a secret. Some good choices are:

  • A random ID such as from openssl rand -hex 16
  • The domain name of your Kubernetes API server

The Vault documentation also explains this attack (see X-Vault-AWS-IAM-Server-ID).

Specifying Credentials & Using AWS Profiles

Credentials can be specified for use with aws-iam-authenticator via any of the methods available to the AWS SDK for Go. This includes specifying AWS credentials with environment variables or by utilizing a credentials file.

AWS named profiles are supported by aws-iam-authenticator via the AWS_PROFILE environment variable. For example, to authenticate with credentials specified in the dev profile, the AWS_PROFILE variable can be exported or specified explicitly (e.g., AWS_PROFILE=dev kubectl get all). If no AWS_PROFILE is set, the default profile is used.

The AWS_PROFILE can also be specified directly in the kubeconfig file as part of the exec flow. For example, to specify that credentials from the dev named profile should always be used by aws-iam-authenticator, your kubeconfig would include an env key that sets the profile:

apiVersion: v1
clusters:
- cluster:
    server: ${server}
    certificate-authority-data: ${cert}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      env:
      - name: "AWS_PROFILE"
        value: "dev"
      args:
        - "token"
        - "-i"
        - "mycluster"

This method allows the appropriate profile to be used implicitly. Note that any environment variables set as part of the exec flow will take precedence over what's already set in your environment.

Note for federated users:

Federated AWS users will often have a "meaningful" attribute mapped onto their assumed role, such as an email address, through the account's AWS configuration. These assumed sessions have a few parts: the role id and the caller-specified-role-name. By default, when a federated user uses the --role option of aws-iam-authenticator to assume a new role, the caller-specified-role-name is converted to a random token and the role id carries through to the newly assumed role.

Using aws-iam-authenticator token ... --forward-session-name will map the original caller-specified-role-name attribute onto the new STS assumed session. This can be helpful when quickly trying to determine who performed action X on the Kubernetes cluster.
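For example (the cluster ID and role ARN are the placeholders used elsewhere in this document):

# preserve the original caller-specified-role-name on the newly assumed session
aws-iam-authenticator token -i my-dev-cluster.example.com \
  -r arn:aws:iam::000000000000:role/KubernetesAdmin \
  --forward-session-name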

Please note, this should not be considered definitive and needs to be cross-referenced (via the role id, which remains consistent) with CloudTrail logs, since a user could potentially change this on the client side.

API Authorization from Outside a Cluster

It is possible to make requests to the Kubernetes API from a client outside the cluster, whether using the bare Kubernetes REST API or one of the language-specific Kubernetes clients (e.g., Python). To do so, you must create a bearer token that is included with the request to the API. The bearer token consists of the string k8s-aws-v1. followed by the base64-encoded, signed HTTP request to the STS GetCallerIdentity Query API; it is sent in the Authorization header of the request. Note that the IAM Authenticator explicitly omits base64 padding to avoid any = characters, guaranteeing a string safe to use in URLs. Below is an example in Python of how this token would be constructed:

import base64
import boto3
import re
from botocore.signers import RequestSigner

def get_bearer_token(cluster_id, region):
    STS_TOKEN_EXPIRES_IN = 60
    session = boto3.session.Session()

    client = session.client('sts', region_name=region)
    service_id = client.meta.service_model.service_id

    signer = RequestSigner(
        service_id,
        region,
        'sts',
        'v4',
        session.get_credentials(),
        session.events
    )

    params = {
        'method': 'GET',
        'url': 'https://sts.{}.amazonaws.com/?Action=GetCallerIdentity&Version=2011-06-15'.format(region),
        'body': {},
        'headers': {
            'x-k8s-aws-id': cluster_id
        },
        'context': {}
    }

    signed_url = signer.generate_presigned_url(
        params,
        region_name=region,
        expires_in=STS_TOKEN_EXPIRES_IN,
        operation_name=''
    )

    base64_url = base64.urlsafe_b64encode(signed_url.encode('utf-8')).decode('utf-8')

    # remove any base64 encoding padding:
    return 'k8s-aws-v1.' + re.sub(r'=*', '', base64_url)

# If making an HTTP request, you would create the authorization headers as follows:

headers = {'Authorization': 'Bearer ' + get_bearer_token('my_cluster', 'us-east-1')}

Troubleshooting

If your client fails with an error like could not get token: AccessDenied [...], you can try assuming the role with the AWS CLI directly:

# AWS CLI version of `aws-iam-authenticator token -r arn:aws:iam::ACCOUNT:role/ROLE`:
$ aws sts assume-role --role-arn arn:aws:iam::ACCOUNT:role/ROLE --role-session-name test

If that fails, there are a few possible problems to check for:

  • Make sure your base AWS credentials are available in your shell (aws sts get-caller-identity can help troubleshoot this).

  • Make sure the target role allows your source account access (in the role trust policy).

  • Make sure your source principal (user/role/group) has an IAM policy that allows sts:AssumeRole for the target role.

  • Make sure you don't have any explicit deny policies attached to your user, group, or in AWS Organizations that would prevent the sts:AssumeRole.

  • Try simulating the sts:AssumeRole call in the Policy Simulator; a CLI equivalent is sketched below.
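A hedged CLI equivalent of that last check (ACCOUNT, YOU, and ROLE are placeholders for your account ID, principal, and target role):

# simulate whether your principal may call sts:AssumeRole on the target role
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::ACCOUNT:user/YOU \
  --action-names sts:AssumeRole \
  --resource-arns arn:aws:iam::ACCOUNT:role/ROLE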

Full Configuration Format

The client and server have the same configuration format. They can share the same exact configuration file, since there are no secrets stored in the configuration.

# a unique-per-cluster identifier to prevent replay attacks (see above)
clusterID: my-dev-cluster.example.com

# default IAM role to assume for `aws-iam-authenticator token`
defaultRole: arn:aws:iam::000000000000:role/KubernetesAdmin

# server listener configuration
server:
  # localhost port where the server will serve the /authenticate endpoint
  port: 21362 # (default)

  # state directory for generated TLS certificate and private keys
  stateDir: /var/aws-iam-authenticator # (default)

  # output path where the generated webhook kubeconfig will be stored.
  generateKubeconfig: /etc/kubernetes/aws-iam-authenticator.kubeconfig # (default)

  # role to assume before querying EC2 API in order to discover metadata like EC2 private DNS Name
  ec2DescribeInstancesRoleARN: arn:aws:iam::000000000000:role/DescribeInstancesRole

  # AWS Account IDs to scrub from server logs. (Defaults to empty list)
  scrubbedAccounts:
  - "111122223333"
  - "222233334444"

  # each mapRoles entry maps an IAM role to a username and set of groups
  # Each username and group can optionally contain template parameters:
  #  1) "{{AccountID}}" is the 12 digit AWS ID.
  #  2) "{{SessionName}}" is the role session name, with `@` characters
  #     transliterated to `-` characters.
  #  3) "{{SessionNameRaw}}" is the role session name, without character
  #     transliteration (available in version >= 0.5).
  mapRoles:
  # statically map arn:aws:iam::000000000000:role/KubernetesAdmin to cluster admin
  - rolearn: arn:aws:iam::000000000000:role/KubernetesAdmin
    username: kubernetes-admin
    groups:
    - system:masters

  # map EC2 instances in my "KubernetesNode" role to users like
  # "aws:000000000000:instance:i-0123456789abcdef0". Only use this if you
  # trust that the role can only be assumed by EC2 instances. If an IAM user
  # can assume this role directly (with sts:AssumeRole) they can control
  # SessionName.
  - rolearn: arn:aws:iam::000000000000:role/KubernetesNode
    username: aws:{{AccountID}}:instance:{{SessionName}}
    groups:
    - system:bootstrappers
    - aws:instances

  # map nodes that should conform to the username "system:node:<private-DNS>".  This
  # requires the authenticator to query the EC2 API in order to discover the private
  # DNS of the EC2 instance originating the authentication request.  Optionally, you
  # may specify a role that should be assumed before querying the EC2 API with the
  # key "server.ec2DescribeInstancesRoleARN" (see above).
  - rolearn: arn:aws:iam::000000000000:role/KubernetesNode
    username: system:node:{{EC2PrivateDNSName}}
    groups:
    - system:nodes
    - system:bootstrappers

  # map federated users in my "KubernetesAdmin" role to users like
  # "admin:alice-example.com". The SessionName is an arbitrary role name
  # like an e-mail address passed by the identity provider. Note that if this
  # role is assumed directly by an IAM User (not via federation), the user
  # can control the SessionName.
  - rolearn: arn:aws:iam::000000000000:role/KubernetesAdmin
    username: admin:{{SessionName}}
    groups:
    - system:masters

  # map federated users in my "KubernetesOtherAdmin" role to users like
  # "alice-example.com". The SessionName is an arbitrary role name
  # like an e-mail address passed by the identity provider. Note that if this
  # role is assumed directly by an IAM User (not via federation), the user
  # can control the SessionName.  Note that the "{{SessionName}}" macro is
  # quoted to ensure it is properly parsed as a string.
  - rolearn: arn:aws:iam::000000000000:role/KubernetesOtherAdmin
    username: "{{SessionName}}"
    groups:
    - system:masters

  # If unalterable identification of an IAM User is desirable, you can map against
  # AccessKeyID.
  - rolearn: arn:aws:iam::000000000000:role/KubernetesOtherAdmin
    username: "admin:{{AccessKeyID}}"
    groups:
    - system:masters

  # each mapUsers entry maps an IAM user to a static username and set of groups
  mapUsers:
  # map IAM user Alice in 000000000000 to user "alice" in group "system:masters"
  - userarn: arn:aws:iam::000000000000:user/Alice
    username: alice
    groups:
    - system:masters

  # automatically map IAM ARNs from these accounts to usernames.
  # NOTE: Always use quotes to avoid the account numbers being recognized as
  # numbers instead of strings by the YAML parser.
  mapAccounts:
  - "012345678901"
  - "456789012345"

  # source mappings from this file (mapUsers, mapRoles, & mapAccounts)
  backendMode:
  - MountedFile

Development

See the development page.

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.
