
OpenShift Pipelines Tutorial

Welcome to the OpenShift Pipelines tutorial!

OpenShift Pipelines is a cloud-native, continuous integration and delivery (CI/CD) solution for building pipelines using Tekton. Tekton is a flexible, Kubernetes-native, open-source CI/CD framework that enables automating deployments across multiple platforms (Kubernetes, serverless, VMs, etc) by abstracting away the underlying details.

OpenShift Pipelines features:

  • Standard CI/CD pipeline definition based on Tekton
  • Build images with Kubernetes tools such as S2I, Buildah, Buildpacks, Kaniko, etc
  • Deploy applications to multiple platforms such as Kubernetes, serverless and VMs
  • Easy to extend and integrate with existing tools
  • Scale pipelines on-demand
  • Portable across any Kubernetes platform
  • Designed for microservices and decentralized teams
  • Integrated with the OpenShift Developer Console

This tutorial walks you through pipeline concepts and shows how to create and run a simple pipeline for building and deploying a containerized app on OpenShift. It also uses Triggers to handle a real GitHub webhook request and kick off a PipelineRun.

In this tutorial you will:

  • Install OpenShift Pipelines
  • Deploy a sample application
  • Install Tasks
  • Create a Pipeline
  • Trigger a Pipeline
  • Configure Triggers to run the pipeline automatically on GitHub push events

Prerequisites

You need an OpenShift 4 cluster in order to complete this tutorial. If you don't have an existing cluster, go to http://try.openshift.com and register for free in order to get an OpenShift 4 cluster up and running on AWS within minutes.

You will also use the Tekton CLI (tkn) throughout this tutorial. Download the Tekton CLI by following the instructions available on the CLI GitHub repository.
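Once downloaded, you can verify that the CLI is on your PATH (the output will vary by version):

$ tkn version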

Concepts

Tekton defines a number of Kubernetes custom resources as building blocks in order to standardize pipeline concepts and provide a terminology that is consistent across CI/CD solutions. These custom resources are an extension of the Kubernetes API that let users create and interact with these objects using kubectl and other Kubernetes tools.
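For example, once OpenShift Pipelines is installed, you can list the Tekton resource types registered with the cluster (a quick check, assuming you are logged in):

$ oc api-resources --api-group=tekton.dev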

The custom resources needed to define a pipeline are listed below:

  • Task: a reusable, loosely coupled number of steps that perform a specific task (e.g. building a container image)
  • Pipeline: the definition of the pipeline and the Tasks that it should perform
  • TaskRun: the execution and result of running an instance of a Task
  • PipelineRun: the execution and result of running an instance of a Pipeline, which includes a number of TaskRuns
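To make the relationship concrete, here is a minimal sketch (with hypothetical names) of a Task and a TaskRun that executes it:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
  - name: echo
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["echo"]
    args: ["hello world"]
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: echo-hello-run
spec:
  taskRef:
    name: echo-hello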

Tekton Architecture

In short, in order to create a pipeline, one does the following:

  • Create custom or install existing reusable Tasks
  • Create a Pipeline and PipelineResources to define your application's delivery pipeline
  • Create a PersistentVolumeClaim to provide the volume/filesystem for pipeline execution or provide a VolumeClaimTemplate which creates a PersistentVolumeClaim
  • Create a PipelineRun to instantiate and invoke the pipeline

For further details on pipeline concepts, refer to the Tekton documentation that provides an excellent guide for understanding various parameters and attributes available for defining pipelines.

The Tekton API enables functionality to be separated from configuration (e.g. Pipelines vs PipelineRuns) such that steps can be reusable.

Triggers extends the Tekton architecture with the following CRDs:

  • TriggerTemplate - Templates resources to be created (e.g. Create PipelineResources and PipelineRun that uses them)
  • TriggerBinding - Validates events and extracts payload fields
  • Trigger - combines TriggerTemplate, TriggerBindings and interceptors.
  • EventListener - provides an addressable endpoint (the event sink). Trigger is referenced inside the EventListener Spec. It uses the extracted event parameters from each TriggerBinding (and any supplied static parameters) to create the resources specified in the corresponding TriggerTemplate. It also optionally allows an external service to pre-process the event payload via the interceptor field.
  • ClusterTriggerBinding - A cluster-scoped TriggerBinding

Using tektoncd/triggers in conjunction with tektoncd/pipeline enables you to easily create full-fledged CI/CD systems where the execution is defined entirely through Kubernetes resources.

You can learn more about Triggers by checking out the Tekton Triggers docs.

In the following sections, you will go through each of the above steps to define and invoke a pipeline.

Install OpenShift Pipelines

OpenShift Pipelines is provided as an add-on on top of OpenShift that can be installed via an operator available in the OpenShift OperatorHub. Follow these instructions in order to install OpenShift Pipelines on OpenShift via the OperatorHub.
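If you prefer the CLI, the operator can also be installed by creating a Subscription resource (a sketch; the exact channel and package name may differ across OpenShift versions):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: latest
  name: openshift-pipelines-operator-rh
  source: redhat-operators
  sourceNamespace: openshift-marketplace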

OpenShift OperatorHub

Deploy Sample Application

Create a project for the sample application that you will be using in this tutorial:

$ oc new-project pipelines-tutorial

OpenShift Pipelines automatically adds and configures a ServiceAccount named pipeline that has sufficient permissions to build and push an image. This service account will be used later in the tutorial.

Run the following command to see the pipeline service account:

$ oc get serviceaccount pipeline
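The output should look similar to the following (the exact values will differ):

NAME       SECRETS   AGE
pipeline   2         30s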

You will use a simple application during this tutorial, which has a frontend (pipelines-vote-ui) and a backend (pipelines-vote-api).

You can also deploy the same applications by applying the artifacts available in the k8s directory of the respective repositories.
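For example, to deploy the backend directly (a sketch, assuming you have network access to GitHub):

$ git clone https://github.com/openshift/pipelines-vote-api.git
$ oc apply -f pipelines-vote-api/k8s/ -n pipelines-tutorial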

If you deploy the application directly, you should be able to see the deployment in the OpenShift web console by switching over to the Developer perspective. Change from Administrator to Developer in the drop-down as shown below:

Developer Perspective

Make sure you are on the pipelines-tutorial project by selecting it from the Project dropdown menu. Either search for pipelines-tutorial in the search bar or scroll down until you find pipelines-tutorial and click on the name of your project.

Projects

Install Tasks

Tasks consist of a number of steps that are executed sequentially. Tasks are executed/run by creating TaskRuns. A TaskRun will schedule a Pod. Each step is executed in a separate container within the same pod. They can also have inputs and outputs in order to interact with other tasks in the pipeline.

Here is an example of a Maven task for building a Maven-based Java application:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: maven-build
spec:
  workspaces:
  - name: filedrop
  steps:
  - name: build
    image: maven:3.6.0-jdk-8-slim
    command:
    - /usr/bin/mvn
    args:
    - install

When a task starts running, it starts a pod and runs each step sequentially in a separate container on the same pod. This task happens to have a single step, but tasks can have multiple steps, and, since they run within the same pod, they have access to the same volumes in order to cache files, access configmaps, secrets, etc. You can specify volumes using workspaces. It is recommended that a task uses at most one writable workspace. A workspace can be backed by a secret, a persistentVolumeClaim, a configMap, or an emptyDir.
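The actual volume backing a workspace is supplied at runtime. For instance, a TaskRun could bind the filedrop workspace of the maven-build task above to an emptyDir volume (a minimal sketch):

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: maven-build-run
spec:
  taskRef:
    name: maven-build
  workspaces:
  - name: filedrop
    emptyDir: {}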

Note that the task only declares the requirement for a workspace, not the specific volume (or git repository) to be used. That allows tasks to be reusable across multiple pipelines and purposes. You can find more examples of reusable tasks in the Tekton Catalog and OpenShift Catalog repositories.

Install the apply-manifests and update-deployment tasks from the repository using oc or kubectl, which you will need for creating a pipeline in the next section:

$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/01_pipeline/01_apply_manifest_task.yaml

$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/01_pipeline/02_update_deployment_task.yaml

You can take a look at the tasks you created using the Tekton CLI:

$ tkn task ls

NAME                AGE
apply-manifests     10 seconds ago
update-deployment   4 seconds ago

We will be using the buildah ClusterTask, which is installed along with the operator. The operator installs a few ClusterTasks, which you can list:

$ tkn clustertasks ls
NAME                       DESCRIPTION   AGE
buildah                                  1 day ago
buildah-v0-14-3                          1 day ago
git-clone                                1 day ago
s2i-php                                  1 day ago
tkn                                      1 day ago

Create Pipeline

A pipeline defines a number of tasks that should be executed and how they interact with each other via their inputs and outputs.

In this tutorial, you will create a pipeline that takes the source code of the application from GitHub and then builds and deploys it on OpenShift.

Pipeline Diagram

Here is the YAML file that represents the above pipeline:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  workspaces:
  - name: shared-workspace
  params:
  - name: deployment-name
    type: string
    description: name of the deployment to be patched
  - name: git-url
    type: string
    description: url of the git repo for the code of deployment
  - name: git-revision
    type: string
    description: revision to be used from repo of the code for deployment
    default: master
  - name: IMAGE
    type: string
    description: image to be built from the code
  tasks:
  - name: fetch-repository
    taskRef:
      name: git-clone
      kind: ClusterTask
    workspaces:
    - name: output
      workspace: shared-workspace
    params:
    - name: url
      value: $(params.git-url)
    - name: subdirectory
      value: ""
    - name: deleteExisting
      value: "true"
    - name: revision
      value: $(params.git-revision)
  - name: build-image
    taskRef:
      name: buildah
      kind: ClusterTask
    params:
    - name: IMAGE
      value: $(params.IMAGE)
    workspaces:
    - name: source
      workspace: shared-workspace
    runAfter:
    - fetch-repository
  - name: apply-manifests
    taskRef:
      name: apply-manifests
    workspaces:
    - name: source
      workspace: shared-workspace
    runAfter:
    - build-image
  - name: update-deployment
    taskRef:
      name: update-deployment
    params:
    - name: deployment
      value: $(params.deployment-name)
    - name: IMAGE
      value: $(params.IMAGE)
    runAfter:
    - apply-manifests

Once you create the pipeline, you can visualize the pipeline flow in the OpenShift web console by switching over to the Developer perspective. Select the Pipelines tab, select the pipelines-tutorial project, and click on the build-and-deploy pipeline.

Pipeline-view

This pipeline builds and deploys both the backend and frontend applications, depending on the parameters you provide.

Pipeline Steps:

  1. Clones the source code of the application from a git repository (git-url and git-revision params)
  2. Builds the container image of the application using the buildah ClusterTask
  3. Pushes the application image to an image registry (IMAGE param)
  4. Deploys the new application image on OpenShift using the apply-manifests and update-deployment tasks

You might have noticed that there are no references to the git repository or the image registry in the pipeline. That's because pipelines in Tekton are designed to be generic and reusable across environments and stages of the application's lifecycle. Pipelines abstract away the specifics of the git source repository and the image to be produced as PipelineResources or Params. When triggering a pipeline, you can provide different git repositories and image registries to be used during pipeline execution. You will do that in a little bit in the next section.

The execution order of tasks is determined by the dependencies defined between the tasks via inputs and outputs, as well as explicit ordering defined via runAfter.

The workspaces field allows you to specify one or more volumes that each task in the pipeline requires during execution.

Create the pipeline by running the following:

$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/01_pipeline/04_pipeline.yaml

Alternatively, in the OpenShift Web Console, you can click on the + at the top right of the screen while you are in the pipelines-tutorial project:

OpenShift Console - Import Yaml

Upon creating the pipeline via the web console, you will be taken to a Pipeline Details page that gives an overview of the pipeline you created.

Check the list of pipelines you have created using the CLI:

$ tkn pipeline ls

NAME               AGE            LAST RUN   STARTED   DURATION   STATUS
build-and-deploy   1 minute ago   ---        ---       ---        ---

Trigger Pipeline

Now that the pipeline is created, you can trigger it to execute the tasks specified in the pipeline.

Note:

If you are not in the pipelines-tutorial namespace and are using another namespace for the tutorial steps, make sure you update the frontend and backend image references to the correct URL with your namespace name, like so:

image-registry.openshift-image-registry.svc:5000/<namespace-name>/pipelines-vote-api:latest

A PipelineRun is how you start a pipeline and tie it to the persistentVolumeClaim and params that should be used for this specific invocation.
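For reference, the tkn command below is roughly equivalent to creating a PipelineRun resource like this (a sketch mirroring the parameters used in this tutorial; git-revision falls back to its default):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: build-and-deploy-run-
spec:
  pipelineRef:
    name: build-and-deploy
  params:
  - name: deployment-name
    value: pipelines-vote-api
  - name: git-url
    value: https://github.com/openshift/pipelines-vote-api.git
  - name: IMAGE
    value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/pipelines-vote-api
  workspaces:
  - name: shared-workspace
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi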

Let's start a pipeline to build and deploy the backend application using tkn:

$ tkn pipeline start build-and-deploy \
    -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/01_pipeline/03_persistent_volume_claim.yaml \
    -p deployment-name=pipelines-vote-api \
    -p git-url=https://github.com/openshift/pipelines-vote-api.git \
    -p IMAGE=image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/pipelines-vote-api \
    --use-param-defaults


Pipelinerun started: build-and-deploy-run-z2rz8

In order to track the pipelinerun progress run:
tkn pipelinerun logs build-and-deploy-run-z2rz8 -f -n pipelines-tutorial

Similarly, start a pipeline to build and deploy the frontend application:

$ tkn pipeline start build-and-deploy \
    -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/01_pipeline/03_persistent_volume_claim.yaml \
    -p deployment-name=pipelines-vote-ui \
    -p git-url=https://github.com/openshift/pipelines-vote-ui.git \
    -p IMAGE=image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/pipelines-vote-ui \
    --use-param-defaults

Pipelinerun started: build-and-deploy-run-xy7rw

In order to track the pipelinerun progress run:
tkn pipelinerun logs build-and-deploy-run-xy7rw -f -n pipelines-tutorial

As soon as you start the build-and-deploy pipeline, a pipelinerun will be instantiated and pods will be created to execute the tasks that are defined in the pipeline.

$ tkn pipeline list
NAME               AGE             LAST RUN                     STARTED          DURATION   STATUS
build-and-deploy   6 minutes ago   build-and-deploy-run-xy7rw   36 seconds ago   ---        Running

Above, we started the build-and-deploy pipeline twice, with the relevant parameters to deploy the backend and frontend applications using a single pipeline definition.

$ tkn pipelinerun ls
NAME                         STARTED         DURATION     STATUS
build-and-deploy-run-xy7rw   36 seconds ago   ---          Running
build-and-deploy-run-z2rz8   40 seconds ago   ---          Running

Check out the logs of the pipelinerun as it runs using the tkn pipeline logs command which interactively allows you to pick the pipelinerun of your interest and inspect the logs:

$ tkn pipeline logs -f
? Select pipelinerun:  [Use arrows to move, type to filter]
> build-and-deploy-run-xy7rw started 36 seconds ago
  build-and-deploy-run-z2rz8 started 40 seconds ago

After a few minutes, the pipeline should finish successfully.

$ tkn pipelinerun list

NAME                         STARTED      DURATION     STATUS
build-and-deploy-run-xy7rw   1 hour ago   2 minutes    Succeeded
build-and-deploy-run-z2rz8   1 hour ago   19 minutes   Succeeded

Looking back at the project, you should see that the images are successfully built and deployed.

Application Deployed

You can get the route of the application by executing the following command and access the application in your browser:

$ oc get route pipelines-vote-ui --template='http://{{.spec.host}}'

If you want to re-run the pipeline, you can use the following shorthand command to rerun the last pipelinerun with the same workspaces, params, and service account used in the previous pipeline run:

$ tkn pipeline start build-and-deploy --last

Whenever there is a change to your repository, you need to start the pipeline explicitly for the new changes to take effect. In the next section, you will configure Triggers to do this automatically.

Triggers

Triggers, in conjunction with pipelines, enable us to hook our pipelines to respond to external GitHub events (push events, pull requests, etc.).

Prerequisites

You need a recent OpenShift 4 cluster running on AWS in order to complete this tutorial. If you don't have an existing cluster, go to http://try.openshift.com and register for free in order to get an OpenShift 4 cluster up and running on AWS within minutes.

NOTE: A cluster running locally via CRC won't work, as the webhook URL needs to be accessible to the GitHub repositories.

Adding Triggers to our Application:

Now let’s add a TriggerTemplate, TriggerBinding, and an EventListener to our project.

Trigger Template

A TriggerTemplate is a resource that has parameters which can be substituted anywhere within the resources of the template.

The definition of our TriggerTemplate is given in 03_triggers/02_template.yaml.

apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: vote-app
spec:
  params:
  - name: git-repo-url
    description: The git repository url
  - name: git-revision
    description: The git revision
    default: master
  - name: git-repo-name
    description: The name of the deployment to be created / patched

  resourcetemplates:
  - apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      generateName: build-deploy-$(tt.params.git-repo-name)-
    spec:
      serviceAccountName: pipeline
      pipelineRef:
        name: build-and-deploy
      params:
      - name: deployment-name
        value: $(tt.params.git-repo-name)
      - name: git-url
        value: $(tt.params.git-repo-url)
      - name: git-revision
        value: $(tt.params.git-revision)
      - name: IMAGE
        value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/$(tt.params.git-repo-name)
      workspaces:
      - name: shared-workspace
        volumeClaimTemplate:
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 500Mi

Run the following command to apply the TriggerTemplate:

$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/02_template.yaml

Trigger Binding

A TriggerBinding enables you to capture fields from an event payload and store them as parameters, which are substituted into the TriggerTemplate whenever an event occurs.

The definition of our TriggerBinding is given in 03_triggers/01_binding.yaml.

apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: vote-app
spec:
  params:
  - name: git-repo-url
    value: $(body.repository.url)
  - name: git-repo-name
    value: $(body.repository.name)
  - name: git-revision
    value: $(body.head_commit.id)

The exact paths (keys) of the parameters we need can be found by examining the event payload (e.g., GitHub push events).
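For example, a GitHub push event payload contains fields like the following (a trimmed, illustrative excerpt with placeholder values; see GitHub's webhook documentation for the full schema):

{
  "repository": {
    "name": "pipelines-vote-api",
    "url": "https://github.com/<github-username>/pipelines-vote-api"
  },
  "head_commit": {
    "id": "<commit-sha>"
  }
}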

Run the following command to apply the TriggerBinding:

$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/01_binding.yaml

Trigger

A Trigger combines the TriggerTemplate, TriggerBindings, and interceptors. It is used as a ref inside the EventListener.

The definition of our Trigger is given in 03_triggers/03_trigger.yaml.

apiVersion: triggers.tekton.dev/v1beta1
kind: Trigger
metadata:
  name: vote-trigger
spec:
  serviceAccountName: pipeline
  interceptors:
    - ref:
        name: "github"
      params:
        - name: "secretRef"
          value:
            secretName: github-secret
            secretKey: secretToken
        - name: "eventTypes"
          value: ["push"]
  bindings:
    - ref: vote-app
  template:
    ref: vote-app

The secret is used to verify that events are coming from the correct source code management system:

---
apiVersion: v1
kind: Secret
metadata:
  name: github-secret
type: Opaque
stringData:
  secretToken: "1234567"

Run the following command to apply the Trigger:

$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/03_trigger.yaml

Event Listener

This component sets up a Service and listens for events. It also connects the TriggerTemplate to the TriggerBinding, forming an addressable endpoint (the event sink).

The definition for our EventListener can be found in 03_triggers/04_event_listener.yaml.

apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: vote-app
spec:
  serviceAccountName: pipeline
  triggers:
    - triggerRef: vote-trigger

Run the following command to create the EventListener:

$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/04_event_listener.yaml

Note: The EventListener sets up a Service. We need to expose that Service as an OpenShift Route to make it publicly accessible.
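You can verify that the Service was created (Triggers prefixes the EventListener name with el-):

$ oc get svc el-vote-app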

Run the following command to expose the EventListener service as a route:
$ oc expose svc el-vote-app

Configuring GitHub WebHooks

Now we need to configure the webhook URL on the backend and frontend source code repositories with the route we exposed previously.

Run the following command to get the webhook URL:
$ echo "URL: $(oc  get route el-vote-app --template='http://{{.spec.host}}')"

Note:

Fork the backend and frontend source code repositories so that you have sufficient privileges to configure GitHub webhooks.

Configure webhook manually

Open your forked GitHub repo, go to Settings > Webhooks, and click Add webhook. Get the payload URL:

$ echo "$(oc get route el-vote-app --template='http://{{.spec.host}}')"

Paste it into Payload URL, select Content type as application/json, add the secret (e.g. 1234567), and click Add webhook.

Add webhook

  • Follow the same procedure to configure a webhook on the frontend repo

Now you should see a webhook configured on your forked source code repositories (in your GitHub repo, go to Settings > Webhooks).

Webhook-final

Great! We have configured the webhooks.

Trigger a PipelineRun

When we push to the backend repository, the following should happen:

  1. The configured webhook in the pipelines-vote-api GitHub repository should push the event payload to our route (the exposed EventListener Service).

  2. The EventListener will pass the event to the TriggerBinding and TriggerTemplate pair.

  3. The TriggerBinding will extract the parameters needed for rendering the TriggerTemplate. Successful rendering of the TriggerTemplate should create a PipelineRun named build-deploy-pipelines-vote-api-<suffix>.

We can test this by pushing a commit to the pipelines-vote-api repository from the GitHub web UI or from the terminal.
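If you do not already have a local clone of your fork, create one first (a sketch, assuming SSH access to GitHub; replace <github-username> with your username):

$ git clone git@github.com:<github-username>/pipelines-vote-api.git
$ cd pipelines-vote-api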

Let's push an empty commit to the pipelines-vote-api repository:

$ git commit -m "empty-commit" --allow-empty && git push origin master
...
Writing objects: 100% (1/1), 190 bytes | 190.00 KiB/s, done.
Total 1 (delta 0), reused 0 (delta 0)
To github.com:<github-username>/pipelines-vote-api.git
   72c14bb..97d3115  master -> master

Watch the Developer perspective in the OpenShift web console; a PipelineRun will be created automatically.

pipeline-run-api
