
Wrangler

Write controllers like a boss

Most people writing controllers are a bit lost when they find that there is nothing in Kubernetes like a type Controller interface where you can just call NewController. Instead, a controller is really a pattern for how you use the generated clientsets, informers, and listers, combined with custom event handlers and a workqueue.

Wrangler is a framework for writing these controllers. It wraps clients, informers, and listers into a simple, usable controller pattern that promotes some good practices.
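
To make that concrete, the sketch below shows roughly what the raw pattern looks like in plain client-go for a single resource type (ConfigMaps here). It is illustrative only, not wrangler or lasso code, and it assumes client-go's classic rate-limiting workqueue API; wrangler generates and hides this kind of boilerplate for every resource type.

import (
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/util/workqueue"
)

// runRawController wires an informer, a workqueue, and a worker loop together by hand.
func runRawController(clientset kubernetes.Interface, stop <-chan struct{}) {
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	defer queue.ShutDown()

	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	informer := factory.Core().V1().ConfigMaps().Informer()
	lister := factory.Core().V1().ConfigMaps().Lister()

	// The event handlers only enqueue keys; the real work happens in the worker loop below.
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
				queue.Add(key)
			}
		},
		UpdateFunc: func(_, newObj interface{}) {
			if key, err := cache.MetaNamespaceKeyFunc(newObj); err == nil {
				queue.Add(key)
			}
		},
	})

	factory.Start(stop)
	cache.WaitForCacheSync(stop, informer.HasSynced)

	// Worker loop: drain the queue and run the "handler" for each key.
	for {
		item, shutdown := queue.Get()
		if shutdown {
			return
		}
		ns, name, _ := cache.SplitMetaNamespaceKey(item.(string))
		if cm, err := lister.ConfigMaps(ns).Get(name); err == nil {
			fmt.Println("reconciling configmap", cm.Namespace, cm.Name)
			queue.Forget(item)
		} else if apierrors.IsNotFound(err) {
			queue.Forget(item) // object was deleted; nothing to do in this sketch
		} else {
			queue.AddRateLimited(item) // transient error; retry later
		}
		queue.Done(item)
	}
}

With wrangler, the equivalent amounts to registering an OnChange handler and starting the generated controller, as shown later in this README.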


Some Projects that use Wrangler

rancher

eks-operator

aks-operator

gke-operator

Versioning and Updates

Wrangler releases use semantic versioning. New major releases are created for breaking changes, new minor releases are created for features, and patches are added for everything else.

The most recent Major.Minor.x release, and any releases being used by the most recent patch version of a supported Rancher version, will be maintained. The most recent major version will receive minor releases, along with patch releases on its most up-to-date minor release. Older Major.Minor.x releases still in use by Rancher will receive security patches at minimum. Consequently, there will be one to three maintained Major.Minor.x releases at a time. Currently maintained versions:

Wrangler Version | Rancher Version | Update Level
---------------- | --------------- | -----------------------
1.0.x            | 2.6.x           | Security Fixes
1.1.x            | 2.7.x           | Bug and Security Fixes

Wrangler releases are not from the default branch. Instead they are from branches with the naming pattern release-MAJOR.MINOR. The default branch (i.e. master) is where changes initially go. This includes bug fixes and new features. Bug fixes are cherry-picked to release branches to be included in patch releases. When it's time to create a new minor or major release, a new release branch is created from the default branch.


Table of Contents

  1. How it Works
    1. Useful Definitions
  2. How to Use Wrangler
    1. Creating an Instance of a Controller
    2. How to Write and Register a Handler to a Controller
    3. How to Run Handlers
    4. Different Ways of Interacting with Objects
    5. A Look at Structures Used in Wrangler

How it Works

Wrangler provides a code generator that generates the clientset, informers, and listers, and additionally generates a controller per resource type. The controller interface can be seen in the A Look at Structures Used in Wrangler section.


The controller interface, along with other helpful structs, interfaces, and functions, is provided by another project, lasso. Lasso ties together the aforementioned tools, while wrangler leverages them in a user-friendly way.

To run custom code for a Kubernetes resource type, all one needs to do is register OnChange handlers on its controller and run the controller. The controller interface also exposes the client and caches through a simple, flat API.

A typical, non-wrangler Kubernetes application would most likely use an informer for a resource type and add event handlers to it. Wrangler instead uses lasso to register each handler: lasso aggregates the handlers into one function that accepts an object of the controller's resource type and runs it through all of the handlers. That aggregated function is then registered with the Kubernetes informer for the controller's resource type. This is done so that an object runs through the handlers serially, which lets each handler receive the updated version of the object and avoids many of the conflicts that would occur if the handlers were not chained together in this fashion.
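
Conceptually, the aggregation looks something like the sketch below. This is an illustration only, not lasso's actual types; the real implementation lives in the lasso repository.

import "errors"

// handler is a simplified stand-in for a wrangler sync function.
type handler[T any] func(key string, obj T) (T, error)

// aggregate chains the registered handlers so each one receives the object returned by the
// handler before it. The resulting single function is what gets registered with the informer.
func aggregate[T any](handlers []handler[T]) handler[T] {
	return func(key string, obj T) (T, error) {
		var errs []error
		for _, h := range handlers {
			next, err := h(key, obj)
			if err != nil {
				errs = append(errs, err)
			}
			// Hand the (possibly updated) object to the next handler.
			obj = next
		}
		return obj, errors.Join(errs...)
	}
}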


Useful Definitions:

factory
Factories manage controllers. Wrangler generates a factory for each API group. Wrangler factories use lasso shared factories for caches and controllers underneath; the lasso factories do most of the heavy lifting but are resource type agnostic, so wrangler wraps them to provide resource-type-specific clients and controllers. When a wrangler generated controller is accessed, a controller for that resource type is requested from a lasso factory: if the controller already exists it is returned, otherwise the lasso factory creates it, persists it, and returns it (see the sketch after this list). You can consult the lasso repository (https://github.com/rancher/lasso) for more details on factories.
informers
Broadcasts events for a given resource type and can register handlers for those events.
listers
Sometimes referred to as a cache. A lister uses informers to maintain a local list of objects of a certain resource type so that requests can be served without hitting the K8s API.
event handlers
Functions that run when a particular event is applied to the resource type the event handler is assigned to.
workqueue
A queue of items to be processed. In this context a queue will usually be a queue of objects of a certain resource type waiting to be processed by all handlers assigned to that resource type.
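
The sketch below illustrates the get-or-create behavior described under factory above. It is illustrative only; the controller type, newController constructor, and factory struct are placeholders, not lasso's real API.

import (
	"sync"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// controller and newController are placeholders standing in for lasso's real controller type.
type controller struct{ gvk schema.GroupVersionKind }

func newController(gvk schema.GroupVersionKind) *controller { return &controller{gvk: gvk} }

type sharedControllerFactory struct {
	mu          sync.Mutex
	controllers map[schema.GroupVersionKind]*controller
}

// ForKind returns the persisted controller for a resource type, creating and persisting it on
// first use. This is roughly what happens when a wrangler generated getter such as
// mgmt.Management().V3().User(...) is called.
func (f *sharedControllerFactory) ForKind(gvk schema.GroupVersionKind) *controller {
	f.mu.Lock()
	defer f.mu.Unlock()

	if c, ok := f.controllers[gvk]; ok {
		return c
	}

	if f.controllers == nil {
		f.controllers = map[schema.GroupVersionKind]*controller{}
	}
	c := newController(gvk)
	f.controllers[gvk] = c
	return c
}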

How to Use Wrangler

Generate controllers for CRDs by using Run() from the controllergen package. This will look like the following:

import (
	controllergen "github.com/rancher/wrangler/pkg/controller-gen"
	"github.com/rancher/wrangler/pkg/controller-gen/args"

	v3 "github.com/rancher/rancher/pkg/apis/management.cattle.io/v3"
)

controllergen.Run(args.Options{
		OutputPackage: "github.com/rancher/rancher/pkg/generated",
		Boilerplate:   "scripts/boilerplate.go.txt",
		Groups: map[string]args.Group{
			"management.cattle.io": {
				PackageName: "management.cattle.io",
				Types: []interface{}{
					// All structs with an embedded ObjectMeta field will be picked up
					"./pkg/apis/management.cattle.io/v3",
					// ProjectCatalog and ClusterCatalog are named
					// explicitly here because they do not have an
					// ObjectMeta field in their struct. Instead
					// they embed type v3.Catalog{} which
					// is a valid object on its own and is generated
					// above.
					v3.ProjectCatalog{},
					v3.ClusterCatalog{},
				},
				GenerateTypes: true,
			},
			"ui.cattle.io": {
				PackageName: "ui.cattle.io",
				Types: []interface{}{
					"./pkg/apis/ui.cattle.io/v1",
				},
				GenerateTypes: true,
			},
		},
	})

For the structs to be used when generating controllers they must have the following comments above the structs (note the newline between the comment and struct so it is not rejected by linters):

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

Four types are shown below. This file would be located at pkg/apis/management.cattle.io/v3 relative to the project root directory. The line passing the "./pkg/apis/management.cattle.io/v3" path ensures that the Setting and Catalog controllers are generated. The lines naming the ProjectCatalog and ClusterCatalog structs ensure their controllers are generated as well, since neither directly has an ObjectMeta field:

import (
	"github.com/rancher/norman/types"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// +genclient
// +genclient:nonNamespaced
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

type Setting struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Value      string `json:"value" norman:"required"`
	Default    string `json:"default" norman:"nocreate,noupdate"`
	Customized bool   `json:"customized" norman:"nocreate,noupdate"`
	Source     string `json:"source" norman:"nocreate,noupdate,options=db|default|env"`
}

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

type ProjectCatalog struct {
	types.Namespaced

	Catalog     `json:",inline" mapstructure:",squash"`
	ProjectName string `json:"projectName,omitempty" norman:"type=reference[project]"`
}

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

type ClusterCatalog struct {
	types.Namespaced

	Catalog     `json:",inline" mapstructure:",squash"`
	ClusterName string `json:"clusterName,omitempty" norman:"required,type=reference[cluster]"`
}

// +genclient
// +genclient:nonNamespaced
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

type Catalog struct {
	metav1.TypeMeta `json:",inline"`
	// Standard object’s metadata. More info:
	// https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#metadata
	metav1.ObjectMeta `json:"metadata,omitempty"`
	// Specification of the desired behavior of the catalog. More info:
	// https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#spec-and-status
	Spec   CatalogSpec   `json:"spec"`
	Status CatalogStatus `json:"status"`
}

Note: this is real code taken from rancher and may not run as-is by the time you read this. It is meant to illustrate how one might begin to use wrangler.


Creating an Instance of a Controller

Controllers are grouped by their API group and bundled into a struct called a factory. Functions to create factories are generated by the Run function discussed above. To call one of these factory constructors, import the appropriate package from the output directory of the generated code.

import (
	"github.com/rancher/rancher/pkg/generated/controllers/management.cattle.io"
	"k8s.io/client-go/rest"
)

func createFactory(config *rest.Config) (*management.Factory, error) {
	mgmt, err := management.NewFactoryFromConfig(config)
	if err != nil {
		return nil, err
	}
	return mgmt, nil
}

// Calling Management() and V3(), the API group and version of the resource types generated in this
// example, instantiates the controller factories for that group and version. User() then instantiates
// the controller for the user resource. This can be done elsewhere, for example when creating a struct,
// but it must be done before the controller is run; otherwise, the cache will not work. In this case we
// are registering a handler, so we would have ended up calling these methods anyway, but if we wanted
// to access a cache for another resource type in our handler then we would also need to make sure it is
// instantiated in a similar fashion.
users := mgmt.Management().V3().User("")

How to Write and Register a Handler to a Controller

Registering a handler means assigning it to a specific Kubernetes resource type's controller. Registered handlers run when the appropriate event occurs on an object of that controller's resource type.

This will be a continuation of our above example:

import (
	"context"

	v3 "github.com/rancher/rancher/pkg/apis/management.cattle.io/v3"
	"github.com/rancher/rancher/pkg/generated/controllers/management.cattle.io"
	"github.com/rancher/wrangler/pkg/generated/controllers/core"
	"k8s.io/client-go/rest"
)

mgmt, err := management.NewFactoryFromConfig(restConfig)
if err != nil {
	return nil, err
}

users := mgmt.Management().V3().User("")

coreFactory, err := core.NewFactoryFromConfig(restConfig)
if err != nil {
	return nil, err
}

// passing a namespace here is optional. If an empty string is passed then the client will look at
// all configmap objects from all namespaces
configmaps := coreFactory.Core().V1().ConfigMap("examplenamespace")

syncHandler := func(id string, obj *v3.User) (*v3.User, error) {
	// obj is nil when the object has been deleted; in that case there is nothing to do
	if obj == nil {
		return obj, nil
	}

	recordedNote := obj.Annotations != nil && obj.Annotations["wroteanoteaboutuser"] == "true"

	if recordedNote {
		// there already is a note, noop
		return obj, nil
	}

	// we are getting the "mainrecord" configmap from the configmap cache. The cache is maintained
	// locally and can try to fulfill requests without using the k8s API. This is much faster and
	// more efficient; however, it does not update immediately, so if the requested object was
	// created very recently the cache may miss and return a not-found error. In that scenario you
	// can either count on the handler re-enqueueing and retrying or just use the regular client.
	record, err := configmaps.Cache().Get("", "mainrecord")
	if err != nil {
		return obj, err
	}

	record.Data[obj.Name] = "recorded"
	record, err = configmaps.Update(record)
	if err != nil {
		return obj, err
	}

	// obj comes from the cache that is iterated over to run handlers and perform other tasks, so we
	// mutate a copy. Otherwise, if the subsequent update failed, our cache would hold an object that
	// does not match the "truth" (the object as it is in etcd).
	obj = obj.DeepCopy()

	if obj.Annotations == nil {
		obj.Annotations = make(map[string]string)
	}

	obj.Annotations["wroteanoteaboutuser"] = "true"

	// Here we are using the k8s client embedded onto the users controller to perform an update. This will go to the K8s API.
	return users.Update(obj)
}

users.OnChange(context.Background(), "user-example-annotate-note-handler", syncHandler)

How to Run Handlers

Now that we have registered an OnChange handler, we can run it like so: mgmt.Start(context.Background(), 50)
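
A slightly fuller sketch, assuming the generated factory's Start method shown above and wrangler's start helper package (github.com/rancher/wrangler/pkg/start); the coreFactory variable comes from the handler example and the worker count of 50 is illustrative:

import (
	"context"

	"github.com/rancher/wrangler/pkg/start"
)

ctx := context.Background()

// Handlers must be registered before the factories are started. start.All starts every factory
// passed to it; 50 controls how many workers process each controller's workqueue.
if err := start.All(ctx, 50, mgmt, coreFactory); err != nil {
	return err
}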


Different Ways of Interacting With Objects

In the above example, two clients and one cache are used to interact with objects. A client can Create, Update, UpdateStatus, Delete, Get, and Patch an object, or List and Watch objects of its resource type. A cache can get an object or list objects of its resource type and will serve the data locally (from its cache) when possible. The client and cache are the most common ways to interact with objects using wrangler.
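
For example, with the configmaps controller from the handler example above (a small sketch reusing those names):

// Cache() serves the request from the locally maintained store: no API round trip, but it can
// briefly lag behind the cluster.
fromCache, err := configmaps.Cache().Get("examplenamespace", "mainrecord")

// The client goes to the Kubernetes API server and returns the current state of the object.
fromAPI, err := configmaps.Get("examplenamespace", "mainrecord", metav1.GetOptions{})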

Another way to interact with objects is the Apply client. Apply works similarly to applying YAML with kubectl. It has benefits such as not assuming the existence of an object the way a client's Update method does: you apply a desired state, and the object is created if it does not already exist or updated to match the desired state if it does. Apply also allows the use of multiple owner references in a way the regular client does not: if any owner reference is deleted, the object will be deleted.
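
A minimal sketch of using Apply, assuming wrangler's apply package (github.com/rancher/wrangler/pkg/apply); the set ID, namespace, and ConfigMap contents are illustrative:

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"

	"github.com/rancher/wrangler/pkg/apply"
)

func applyExample(cfg *rest.Config, owner *corev1.ConfigMap) error {
	a, err := apply.NewForConfig(cfg)
	if err != nil {
		return err
	}

	desired := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "example", Namespace: "examplenamespace"},
		Data:       map[string]string{"state": "desired"},
	}

	// ApplyObjects creates the ConfigMap if it is missing and otherwise updates it to match the
	// desired state. The set ID and owner tie the applied object's lifecycle to the owner, so it
	// is cleaned up when the owner goes away.
	return a.
		WithSetID("example-apply").
		WithOwner(owner).
		ApplyObjects(desired)
}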


A Look at Structures Used in Wrangler

type FooController interface {
	FooClient

	// OnChange registers a handler that will run whenever an object of the matching resource type is created or updated. This function accepts a sync function specifically generated for the object type and then wraps the function in a function that is compatible with AddGenericHandler. It then uses AddGenericHandler to register the wrapped function.
	OnChange(ctx context.Context, name string, sync FooHandler)
	// OnRemove registers a handler that will run whenever an object of the matching resource type is removed. This function accepts a sync function specifically generated for the object type and then wraps the function in a function that is compatible with AddGenericRemoveHandler. It then uses AddGenericRemoveHandler to register the wrapped function.
	OnRemove(ctx context.Context, name string, sync FooHandler)
	// Enqueue will rerun all handlers registered to the object's type against the object
	Enqueue(namespace, name string)

	// Cache returns a locally maintained cache that can be used for get and list requests
	Cache() FooCache

	// Informer returns an informer for the resource type
	Informer() cache.SharedIndexInformer
	// GroupVersionKind returns the API group, version, and Kind of the resource type the controller is for
	GroupVersionKind() schema.GroupVersionKind

	// AddGenericHandler registers the handler function for the controller
	AddGenericHandler(ctx context.Context, name string, handler generic.Handler)
	// AddGenericRemoveHandler registers a handler that will run when an object of the controller's resource type is removed
	AddGenericRemoveHandler(ctx context.Context, name string, handler generic.Handler)
	// Updater returns a function that accepts a runtime.Object and asserts it as the controller's respective resource type struct. It then passes the object to the resource type's client's update function. This is mainly consumed internally by wrangler to implement other functionality.
	Updater() generic.Updater
}

type FooClient interface {
	// Create creates a new instance of the resource type in kubernetes
	Create(*v1alpha1.Foo) (*v1alpha1.Foo, error)
	// Update updates the given object in kubernetes
	Update(*v1alpha1.Foo) (*v1alpha1.Foo, error)
	// UpdateStatus updates only the status of the given object and does not trigger OnChange handlers.
	// Status must be defined as a subresource of the type's CRD for this method to be generated.
	UpdateStatus(*v1alpha1.Foo) (*v1alpha1.Foo, error)
	// Delete deletes the given object in kubernetes
	Delete(namespace, name string, options *metav1.DeleteOptions) error
	// Get gets the object of the given name and namespace in kubernetes
	Get(namespace, name string, options metav1.GetOptions) (*v1alpha1.Foo, error)
	// List lists all the objects matching the given namespace for the resource type
	List(namespace string, opts metav1.ListOptions) (*v1alpha1.FooList, error)
	// Watch returns a channel that will stream objects as they are created, removed, or updated
	Watch(namespace string, opts metav1.ListOptions) (watch.Interface, error)
	// Patch accepts a diff that can be applied to an existing object for the client's resource type. Depending on PatchType, which specifies the strategy
	// to be used when applying the diff, patch can also create a new object. See the following for more information: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/, https://kubernetes.io/docs/reference/using-api/server-side-apply/.
	Patch(namespace, name string, pt types.PatchType, data []byte, subresources ...string) (result *v1alpha1.Foo, err error)
}

type FooCache interface {
	// Get gets the object of the given name and namespace from the local cache
	Get(namespace, name string) (*v1alpha1.Foo, error)
	// List lists all the objects matching the given namespace for the resource type from the cache
	List(namespace string, selector labels.Selector) ([]*v1alpha1.Foo, error)

	// AddIndexer registers a function that will be used to organize objects in the cache. The indexer returns one or more keys computed from the object.
	AddIndexer(indexName string, indexer FooIndexer)
	// GetByIndex will search for objects that match the given key when the named indexer is applied to it
	GetByIndex(indexName, key string) ([]*v1alpha1.Foo, error)
}

type FooIndexer func(obj *v1alpha1.Foo) ([]string, error)
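
For example, an indexer can be registered on the cache to look objects up by a field. This is a hedged sketch: the byEmail index name, the Spec.Email field, and the fooController variable are hypothetical.

const byEmail = "example.io/foo-by-email"

fooCache := fooController.Cache()

// Register the indexer (typically before the controllers are started) so the cache maintains the
// index as objects are added and updated.
fooCache.AddIndexer(byEmail, func(obj *v1alpha1.Foo) ([]string, error) {
	return []string{obj.Spec.Email}, nil
})

// GetByIndex then returns every Foo whose indexer output contains the given key.
matches, err := fooCache.GetByIndex(byEmail, "someone@example.com")
if err != nil {
	return err
}
for _, foo := range matches {
	fmt.Println("found", foo.Namespace, foo.Name)
}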
