
kube-image-keeper (kuik)


kube-image-keeper (a.k.a. kuik, which is pronounced /kwɪk/, like "quick") is a container image caching system for Kubernetes. It saves the container images used by your pods in its own local registry so that these images remain available if the original becomes unavailable.

Why and when is it useful?

At Enix, we manage production Kubernetes clusters both for our internal use and for various customers; sometimes on premises, sometimes in various clouds, public or private. We regularly run into image availability issues, for instance:

  • the registry is unavailable or slow;
  • a critical image was deleted from the registry (by accident or because of a misconfigured retention policy);
  • the registry has pull quotas (or other rate-limiting mechanisms) and temporarily won't let us pull more images.

(The last point is a well-known challenge when pulling lots of images from the Docker Hub, and becomes particularly painful when private Kubernetes nodes access the registry through a single NAT gateway!)

We needed a solution that would:

  • work across a wide range of Kubernetes versions, container engines, and image registries,
  • preserve Kubernetes' out-of-the-box image caching behavior and image pull policies,
  • have fairly minimal requirements,
  • and be easy and quick to install.

We investigated other options, and we didn't find any that would quite fit our requirements, so we wrote kuik instead.

Prerequisites

  • A Kubernetes cluster¹ (duh!)
  • Admin permissions²
  • cert-manager³
  • Helm⁴ >= 3.2.0
  • CNI plugin with port-mapper⁵ enabled
  • In a production environment, we definitely recommend that you use persistent⁶ storage

¹A local development cluster like minikube or KinD is fine.
²In addition to its own pods, kuik needs to register a MutatingWebhookConfiguration.
³kuik uses cert-manager to issue and configure its webhook certificate. You don't need to configure cert-manager in a particular way (you don't even need to create an Issuer or ClusterIssuer). It's alright to just kubectl apply the YAML as shown in the cert-manager installation instructions.
⁴If you prefer to install with "plain" YAML manifests, we'll tell you how to generate these manifests.
⁵Most CNI plugins these days enable port-mapper out of the box, so this shouldn't be an issue, but we're mentioning it just in case.
⁶You can use kuik without persistence, but if the pod running the registry gets deleted, you will lose your cached images. They will be automatically pulled again when needed, though.

Supported Kubernetes versions

kuik has been developed for, and tested with, Kubernetes 1.21 to 1.24; but the code doesn't use any deprecated (or new) feature or API, and should work with newer versions as well. (Community users have reported success with Kubernetes 1.26).

How it works

When a pod is created, kuik's mutating webhook rewrites its images on the fly, adding a localhost:{port}/ prefix (the port is 7439 by default, and is configurable).

On localhost:{port}, there is an image proxy that serves images from kuik's caching registry (when the images have been cached) or directly from the original registry (when the images haven't been cached yet).
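Conceptually, the rewrite is just a prefix on the image reference. Here is a minimal sketch (the function name is ours, for illustration only, not kuik's actual code):

```shell
# Illustrative sketch of the rewrite performed by kuik's mutating webhook:
# every image reference gets a localhost:{port}/ prefix (7439 by default).
rewrite_image() {
  local port="$1" image="$2"
  echo "localhost:${port}/${image}"
}

rewrite_image 7439 "nginx"                   # -> localhost:7439/nginx
rewrite_image 7439 "ghcr.io/jpetazzo/shpod"  # -> localhost:7439/ghcr.io/jpetazzo/shpod
```

These rewritten references are exactly what shows up in the pod specs, as the kubectl output below illustrates.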

One controller watches pods, and when it notices new images, it creates CachedImage custom resources for these images.

Another controller watches these CachedImage custom resources, and copies images from source registries to kuik's caching registry accordingly.

Here is what our images look like when using kuik:

$ kubectl get pods -o custom-columns=NAME:metadata.name,IMAGES:spec.containers[*].image
NAME                   IMAGES
debugger               localhost:7439/registrish.s3.amazonaws.com/alpine
factori-0              localhost:7439/factoriotools/factorio:1.1
nvidiactk-b5f7m        localhost:7439/nvcr.io/nvidia/k8s/container-toolkit:v1.12.0-ubuntu20.04
sshd-8b8c6cfb6-l2tc9   localhost:7439/ghcr.io/jpetazzo/shpod
web-8667899c97-2v88h   localhost:7439/nginx
web-8667899c97-89j2h   localhost:7439/nginx
web-8667899c97-fl54b   localhost:7439/nginx

The kuik controllers keep track of how many pods use a given image. When an image isn't used anymore, it is flagged for deletion, and removed one month later. This expiration delay can be configured. You can see kuik's view of your images by looking at the CachedImages custom resource:

$ kubectl get cachedimages
NAME                                                       CACHED   EXPIRES AT             PODS COUNT   AGE
docker.io-dockercoins-hasher-v0.1                          true     2023-03-07T10:50:14Z                36m
docker.io-factoriotools-factorio-1.1                       true                            1            4m1s
docker.io-jpetazzo-shpod-latest                            true     2023-03-07T10:53:57Z                9m18s
docker.io-library-nginx-latest                             true                            3            36m
ghcr.io-jpetazzo-shpod-latest                              true                            1            36m
nvcr.io-nvidia-k8s-container-toolkit-v1.12.0-ubuntu20.04   true                            1            29m
registrish.s3.amazonaws.com-alpine-latest                                                  1            35m
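The resource names above are derived from the image references. Roughly, slashes and colons become dashes; this sketch approximates the mapping for illustration (it is not kuik's actual naming code):

```shell
# Approximate mapping from an image reference to the corresponding
# CachedImage resource name: "/" and ":" are both replaced with "-".
cachedimage_name() {
  echo "$1" | tr '/:' '--'
}

cachedimage_name "docker.io/library/nginx:latest"  # -> docker.io-library-nginx-latest
cachedimage_name "ghcr.io/jpetazzo/shpod:latest"   # -> ghcr.io-jpetazzo-shpod-latest
```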

Architecture and components

In kuik's namespace, you will find:

  • a Deployment to run kuik's controllers,
  • a DaemonSet to run kuik's image proxy,
  • a StatefulSet to run kuik's image cache (a Deployment is used instead when this component runs in HA mode).

The image cache will obviously require a bit of disk space to run (see Garbage collection and limitations below). Otherwise, kuik's components are fairly lightweight in terms of compute resources. This shows CPU and RAM usage with the default setup, featuring two controllers in HA mode:

$ kubectl top pods
NAME                                             CPU(cores)   MEMORY(bytes)
kube-image-keeper-0                              1m           86Mi
kube-image-keeper-controllers-5b5cc9fcc6-bv6cp   1m           16Mi
kube-image-keeper-controllers-5b5cc9fcc6-tjl7t   3m           24Mi
kube-image-keeper-proxy-54lzk                    1m           19Mi

(Architecture diagram)

Metrics

Refer to the dedicated documentation.

Installation

  1. Make sure that you have cert-manager installed. If not, check its installation page (it's fine to use the kubectl apply one-liner, and no further configuration is required).
  2. Install kuik's Helm chart from the enix/helm-charts repository:
helm upgrade --install \
     --create-namespace --namespace kuik-system \
     kube-image-keeper kube-image-keeper \
     --repo https://charts.enix.io/

That's it!

Installation with plain YAML files

You can use Helm to generate plain YAML files and then deploy these YAML files with kubectl apply or whatever you want:

helm template --namespace kuik-system \
     kube-image-keeper kube-image-keeper \
     --repo https://charts.enix.io/ \
     > /tmp/kuik.yaml
kubectl create namespace kuik-system
kubectl apply -f /tmp/kuik.yaml --namespace kuik-system

Configuration and customization

If you want to change, for example, the expiration delay or the port number used by the proxy, or enable persistence (with a PVC) for the registry cache, you can do that with standard Helm values.

You can see the full list of parameters (along with their meaning and default values) in the chart's values.yaml file, or on kuik's page on the Artifact Hub.

For instance, to extend the expiration delay to 3 months (90 days), you can deploy kuik like this:

helm upgrade --install \
     --create-namespace --namespace kuik-system \
     kube-image-keeper kube-image-keeper \
     --repo https://charts.enix.io/ \
     --set cachedImagesExpiryDelay=90

Advanced usage

Pod filtering

There are 3 ways to tell kuik which pods it should manage (or, conversely, which ones it should ignore).

  • If a pod has the label kube-image-keeper.enix.io/image-caching-policy=ignore, kuik will ignore the pod (it will not rewrite its image references).
  • If a pod is in an ignored Namespace, it will also be ignored. Namespaces can be ignored by setting the Helm value controllers.webhook.ignoredNamespaces, which defaults to [kube-system]. (Note: this feature relies on the NamespaceDefaultLabelName feature gate to work.)
  • Finally, kuik will only work on pods matching a specific selector. By default, the selector is empty, which means "match all the pods". The selector can be set with the Helm value controllers.webhook.objectSelector.matchExpressions.
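For instance, you can exclude a single pod with the label mentioned above, or ignore extra namespaces at install time (the pod and namespace names here are just examples):

```shell
# Exclude one pod from kuik by labeling it:
kubectl label pod my-pod kube-image-keeper.enix.io/image-caching-policy=ignore

# Ignore additional namespaces via the Helm value (example namespace list):
helm upgrade --install \
     --create-namespace --namespace kuik-system \
     kube-image-keeper kube-image-keeper \
     --repo https://charts.enix.io/ \
     --set 'controllers.webhook.ignoredNamespaces={kube-system,monitoring}'
```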

This logic isn't implemented by the kuik controllers or webhook directly, but through Kubernetes' standard webhook object selectors. In other words, these parameters end up in the MutatingWebhookConfiguration template to filter which pods get presented to kuik's webhook. When the webhook rewrites the images for a pod, it adds a label to that pod, and the kuik controllers then rely on that label to know which CachedImages resources to create.

Keep in mind that kuik will ignore pods scheduled into its own namespace.

Cache persistence & garbage collection

Persistence is disabled by default. You can enable it by setting the Helm value registry.persistence.enabled=true. This will create a PersistentVolumeClaim with a default size of 20 GiB. You can change that size by setting the value registry.persistence.size. Keep in mind that enabling persistence isn't enough to provide high availability of the registry! If you want kuik to be highly available, please refer to the high availability guide.

Note that persistence requires your cluster to have some PersistentVolumes. If you don't have PersistentVolumes, kuik's registry Pod will remain Pending and your images won't be cached (but they will still be served transparently by kuik's image proxy).
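Assuming your cluster has a default StorageClass, enabling persistence looks like this (the 50Gi size is just an example; the default is 20 GiB):

```shell
helm upgrade --install \
     --create-namespace --namespace kuik-system \
     kube-image-keeper kube-image-keeper \
     --repo https://charts.enix.io/ \
     --set registry.persistence.enabled=true \
     --set registry.persistence.size=50Gi
```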

Garbage collection and limitations

When a CachedImage expires because it is no longer used by the cluster, the image is deleted from the registry. However, since kuik uses Docker's registry, this only deletes reference files such as tags. It doesn't delete blobs, which account for most of the used disk space. Garbage collection removes those blobs and frees up space. The garbage collection job can be scheduled with the registry.garbageCollectionSchedule value, which uses a cron-like format. It is disabled by default, because running garbage collection without persistence would just wipe out the cache registry.

Garbage collection can only run when the registry is read-only (or stopped); otherwise, image corruption may happen. (This is described in the registry documentation.) Before running garbage collection, kuik stops the registry. During that time, all image pulls are automatically proxied to the source registry, so that garbage collection is mostly transparent for cluster nodes.

Reminder: since garbage collection recreates the cache registry pod, if you run garbage collection without persistence, this will wipe out the cache registry. It is not recommended for production setups!
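For example, to run garbage collection every Sunday at 2am on a persistent setup (the schedule here is just an example, in standard cron syntax):

```shell
helm upgrade --install \
     --create-namespace --namespace kuik-system \
     kube-image-keeper kube-image-keeper \
     --repo https://charts.enix.io/ \
     --set registry.persistence.enabled=true \
     --set 'registry.garbageCollectionSchedule=0 2 * * 0'
```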

Currently, if the cache gets deleted, the status.isCached field of CachedImages isn't updated automatically, which means that kubectl get cachedimages will incorrectly report that images are cached. However, you can trigger a controller reconciliation with the following command, which will pull all images again:

kubectl annotate cachedimages --all --overwrite "timestamp=$(date +%s)"

Known issues

Conflicts with other mutating webhooks

Kuik's core functionality is to intercept pod creation events and rewrite container image references to enable caching. However, some Kubernetes operators create pods autonomously and don't expect modifications to the image definitions (for example cloudnative-pg). For those, the unexpected rewriting of the pod's .spec.containers[].image field can lead to an infinite reconciliation loop, because the operator's expected target container image will be endlessly rewritten by kuik's MutatingWebhookConfiguration. In that case, you may want to disable kuik for specific pods using the following Helm values:

controllers:
  webhook:
    objectSelector:
      matchExpressions:
        - key: cnpg.io/podRole
          operator: NotIn
          values:
            - instance

Private images are a bit less private

Imagine the following scenario:

  • pods A and B use a private image, example.com/myimage:latest
  • pod A correctly references imagePullSecrets, but pod B does not

On a normal Kubernetes cluster (without kuik), if pods A and B are on the same node, then pod B will run correctly, even though it doesn't reference imagePullSecrets, because the image gets pulled when starting pod A, and once it's available on the node, any other pod can use it. However, if pods A and B are on different nodes, pod B won't start, because it won't be able to pull the private image. Some folks may use that to segregate sensitive images to specific nodes, using a combination of taints, tolerations, or node selectors.

However, when using kuik, once an image has been pulled and stored in kuik's registry, it becomes available for any node on the cluster. This means that using taints, tolerations, etc. to limit sensitive images to specific nodes won't work anymore.

Cluster autoscaling delays

With kuik, all image pulls (except in the namespaces excluded from kuik) go through kuik's registry proxy, which runs on each node thanks to a DaemonSet. When a node gets added to a Kubernetes cluster (for instance, by the cluster autoscaler), a kuik registry proxy Pod gets scheduled on that node, but it will take a brief moment to start. During that time, other image pulls on that node will fail. Thanks to Kubernetes' automatic retry mechanisms, they will eventually succeed, but on new nodes, you may see Pods in ErrImagePull or ImagePullBackOff status for a minute before everything works correctly. If you are using cluster autoscaling and trying to achieve very fast scale-up times, this is something that you might want to keep in mind.
