
gitops-playground

Reproducible infrastructure to showcase GitOps workflows with Kubernetes.

In fact, this rolls out a complete DevOps stack with different features including

  • GitOps (with different controllers to choose from: Argo CD and Flux v2),
  • example applications and CI-pipelines (using Jenkins and the gitops-build-lib),
  • Notifications/Alerts (using Mailhog for demo purposes)
  • Monitoring (using Prometheus and Grafana),
  • Secrets management (using Vault and external secrets operator).

The gitops-playground is derived from our experience in consulting and operating the myCloudogu platform, and is used in our GitOps trainings for both Flux and Argo CD.
For questions or suggestions you are welcome to join us at our myCloudogu community forum.

Discuss it on myCloudogu

Playground features | Installation

TL;DR

You can run a local k8s cluster with the GitOps playground installed with only one command (on Linux):

bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/init-cluster.sh) \
  && sleep 2 && docker run --rm -it --pull=always -u $(id -u) \
    -v ~/.k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
    --net=host \
    ghcr.io/cloudogu/gitops-playground --yes --argocd --fluxv2

This command will also print URLs of the applications inside the cluster to get you started.
Note that you can also use only one of --argocd or --fluxv2 to select specific operators. This will also speed up the process.

We recommend running this command as an unprivileged user that is part of the docker group.
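
For example, to install only Argo CD, the command from above boils down to:

bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/init-cluster.sh) \
  && sleep 2 && docker run --rm -it --pull=always -u $(id -u) \
    -v ~/.k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
    --net=host \
    ghcr.io/cloudogu/gitops-playground --yes --argocd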

Table of contents

What is the GitOps Playground?

The GitOps Playground provides a reproducible environment for setting up a GitOps stack. It provides an image for automatically setting up a Kubernetes cluster including a CI server (Jenkins), source code management (SCM-Manager), monitoring and alerting (Prometheus and Grafana), secrets management (HashiCorp Vault and External Secrets Operator) and of course GitOps operators: here you can choose between Flux v2 and Argo CD.

The playground also deploys a number of example applications.

The GitOps Playground lowers the barrier to getting your hands on GitOps. There is no need to read lots of books and operator docs, get familiar with CLIs, or ponder GitOps repository folder structures and staging. The GitOps Playground is a pre-configured environment to see GitOps in motion, including more advanced use cases like notifications, monitoring and secrets management.

Installation

There are several options for running the GitOps playground:

  • on a local k3d cluster
    NOTE: Currently runs only on Linux!
    Running on Windows or Mac is possible in general, but we would need to bind all needed ports to the k3d container.
    See our POC. Let us know if this feature is of interest to you.
  • on a remote k8s cluster
  • each with the option
    • to use an external Jenkins, SCM-Manager and registry (this can be run in production, e.g. with a Cloudogu Ecosystem) or
    • to run everything inside the cluster (for demo only)

The diagrams below show an overview of the playground's architecture and three scenarios for running the playground. For a simpler overview including all optional features such as monitoring and secrets management, see the intro at the very top.

Note that running Jenkins inside the cluster is meant for demo purposes only. The third graphic shows our production scenario with the Cloudogu EcoSystem (CES). Here, better security and build performance are achieved using ephemeral Jenkins build agents spawned in the cloud.

Overview

Diagrams: Demo on local machine · Demo on remote cluster · Production environment with CES (i.e. the playground on a local machine, the playground on a remote cluster, and a possible production environment).

Create Cluster

If you don't have a demo cluster at hand we provide scripts to create either

  • a local k3d cluster (see docs or script for more details):
    bash <(curl -s \
      https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/init-cluster.sh)
  • a remote k8s cluster on Google Kubernetes Engine (e.g. via Terraform, see our docs),
  • or almost any k8s cluster.
    Note that if you want to deploy Jenkins inside the cluster, Docker is required as container runtime.
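
    If you are unsure about the runtime, you can check what your nodes report (this is a generic kubectl command, not part of the playground scripts):

    # The CONTAINER-RUNTIME column shows e.g. docker://… or containerd://…
    kubectl get nodes -o wide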

Apply playground

You can apply the playground to your cluster using our container image ghcr.io/cloudogu/gitops-playground.
On success, the container prints a little intro on how to get started with the GitOps playground.

There are several options for running the container:

  • For a local k3d cluster, we recommend running the image as a local container via Docker.
  • For remote clusters (e.g. on GKE) you can run the image inside a pod of the target cluster via kubectl.

All options offer the same parameters, see below.

Apply via Docker (local cluster)

When connecting to k3d it is easiest to apply the playground via a local container in the host network and pass k3d's kubeconfig.

CLUSTER_NAME=gitops-playground
docker pull ghcr.io/cloudogu/gitops-playground
docker run --rm -it -u $(id -u) \
  -v ~/.k3d/kubeconfig-${CLUSTER_NAME}.yaml:/home/.kube/config \
  --net=host \
  ghcr.io/cloudogu/gitops-playground # additional parameters go here

Note:

  • docker pull in advance makes sure you have the newest image, even if you ran this command before.
    Of course, you could also pin a specific version of the image (see the example after these notes).
  • Using the host network makes it possible to determine localhost and to use k3d's kubeconfig without alteration, as it accesses the API server via a port bound to localhost.
  • We run as the local user in order to avoid file permission issues with the kubeconfig-${CLUSTER_NAME}.yaml.
  • If you experience issues and want to access the full log files, use the following command while the container is running:
docker exec -it \
  $(docker ps -q  --filter ancestor=ghcr.io/cloudogu/gitops-playground) \
  bash -c -- 'tail -f  -n +1 /tmp/playground-log-*'
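
For example, to pin the image instead of always pulling the latest (the tag is a placeholder; check the registry for actual versions):

docker pull ghcr.io/cloudogu/gitops-playground:<version>
docker run --rm -it -u $(id -u) \
  -v ~/.k3d/kubeconfig-${CLUSTER_NAME}.yaml:/home/.kube/config \
  --net=host \
  ghcr.io/cloudogu/gitops-playground:<version>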

Apply via kubectl (remote cluster)

For remote clusters it is easiest to apply the playground via kubectl. You can find info on how to install kubectl here.

# Create a temporary ServiceAccount and authorize via RBAC.
# This is needed to install CRDs, etc.
kubectl create serviceaccount gitops-playground-job-executer -n default
kubectl create clusterrolebinding gitops-playground-job-executer \
  --clusterrole=cluster-admin \
  --serviceaccount=default:gitops-playground-job-executer

# Then apply the playground with the following command.
# The --remote parameter exposes Jenkins, SCMM and argo on well-known ports 
# for example, so you don't have to remember the individual ports
kubectl run gitops-playground -i --tty --restart=Never \
  --overrides='{ "spec": { "serviceAccount": "gitops-playground-job-executer" } }' \
  --image ghcr.io/cloudogu/gitops-playground \
  -- --yes --remote # additional parameters go here

# If everything succeeded, remove the objects
kubectl delete clusterrolebinding/gitops-playground-job-executer \
  sa/gitops-playground-job-executer pods/gitops-playground -n default  

In general, docker run should work here as well. But GKE, for example, uses gcloud and python in its kubeconfig. Running inside the cluster avoids these kinds of issues.

Additional parameters

The following describes more parameters and use cases.

You can get a full list of all options like so:

docker run --rm ghcr.io/cloudogu/gitops-playground --help
Deploy GitOps operators
  • --argocd - deploy Argo CD GitOps operator
  • --fluxv2 - deploy Flux v2 GitOps operator
Deploy with local Cloudogu Ecosystem

See our Quickstart Guide on how to set up the instance.
Then set the following parameters.

# Note: 
# * In this case --password only sets the Argo CD admin password (Jenkins and 
#    SCMM are external)
# * Insecure is needed, because the local instance will not have a valid cert
--jenkins-url=https://192.168.56.2/jenkins \ 
--scmm-url=https://192.168.56.2/scm \
--jenkins-username=admin \
--jenkins-password=yourpassword \
--scmm-username=admin \
--scmm-password=yourpassword \
--password=yourpassword \
--insecure
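
As a sketch, these parameters are simply appended to the container invocation shown in the Apply via Docker section above (IP address and passwords are the example values from this section):

docker run --rm -it -u $(id -u) \
  -v ~/.k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
  --net=host \
  ghcr.io/cloudogu/gitops-playground --yes --argocd \
  --jenkins-url=https://192.168.56.2/jenkins --jenkins-username=admin --jenkins-password=yourpassword \
  --scmm-url=https://192.168.56.2/scm --scmm-username=admin --scmm-password=yourpassword \
  --password=yourpassword --insecure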
Deploy with productive Cloudogu Ecosystem and GCR

Using Google Container Registry (GCR) fits well with our cluster creation example via Terraform on Google Kubernetes Engine (GKE), see our docs.

Note that you can get a free CES demo instance set up with a Kubernetes Cluster as GitOps Playground here.

# Note: In this case --password only sets the Argo CD admin password (Jenkins 
# and SCMM are external) 
--jenkins-url=https://your-ecosystem.cloudogu.net/jenkins \ 
--scmm-url=https://your-ecosystem.cloudogu.net/scm \
--jenkins-username=admin \
--jenkins-password=yourpassword \
--scmm-username=admin \
--scmm-password=yourpassword \
--password=yourpassword \
--registry-url=eu.gcr.io \
--registry-path=yourproject \
--registry-username=_json_key \ 
--registry-password="$( cat account.json | sed 's/"/\\"/g' )" 
Override default images used in the gitops-build-lib

Images used by the gitops-build-lib are set in the gitopsConfig in each application's Jenkinsfile like this:

def gitopsConfig = [
    ...
    buildImages          : [
            helm: 'ghcr.io/cloudogu/helm:3.10.3-1',
            kubectl: 'lachlanevenson/k8s-kubectl:v1.25.4',
            kubeval: 'ghcr.io/cloudogu/helm:3.10.3-1',
            helmKubeval: 'ghcr.io/cloudogu/helm:3.10.3-1',
            yamllint: 'cytopia/yamllint:1.25-0.7'
    ],...

To override each image in all the applications you can use the following parameters:

  • --kubectl-image someRegistry/someImage:1.0.0
  • --helm-image someRegistry/someImage:1.0.0
  • --kubeval-image someRegistry/someImage:1.0.0
  • --helmkubeval-image someRegistry/someImage:1.0.0
  • --yamllint-image someRegistry/someImage:1.0.0
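
For example, to pull the build images from your own registry instead (registry host and tags below are placeholders, not defaults of the playground):

--kubectl-image=registry.example.com/k8s-kubectl:v1.25.4 \
--helm-image=registry.example.com/helm:3.10.3-1 \
--yamllint-image=registry.example.com/yamllint:1.25-0.7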
Argo CD-Notifications

If you are using a remote cluster, you can set the --argocd-url parameter so that argocd-notifications messages contain a link to the corresponding application.
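
For example (the hostname is just a placeholder):

--argocd-url=https://argocd.your-domain.example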

Monitoring

Set the parameter --monitoring to enable deployment of monitoring and alerting tools like Prometheus, Grafana and MailHog.

See Monitoring tools for details.

Secrets Management

Set the parameter --vault=[dev|prod] to enable deployment of the secrets management tools HashiCorp Vault and External Secrets Operator. See Secrets management tools for details.

Remove playground

For k3d, you can just run k3d cluster delete gitops-playground. This will delete the whole cluster. Right now, there is no way to remove the playground from a cluster completely, but we are planning to implement this.

Stack

As described above the GitOps playground comes with a number of applications. Some of them can be accessed via web.

  • Jenkins
  • SCM-Manager
  • Argo CD
  • Prometheus/Grafana
  • Vault
  • Example applications for each GitOps operator, some with staging and production environments.

The URLs of the applications depend on the environment the playground is deployed to. The following lists all applications and how to find their respective URLs for a GitOps playground deployed to a local or remote cluster.

For remote clusters you need the external IP; there is no need to specify the port (everything runs on port 80). Basically, you can get the IP address as follows:

kubectl -n "${namespace}" get svc "${serviceName}" \
  --template="{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}"

There is also a convenience script scripts/get-remote-url. The script waits if the external IP is not yet present. You can use it like so:

bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/get-remote-url) \
  jenkins default

You can also open the application in the browser right away, for example like so:

xdg-open $(bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/get-remote-url) \
   jenkins default)

Credentials

If deployed within the cluster, all applications can be accessed via: admin/admin

Note that you can change the password with the --password argument (and you should for a remote cluster!). There is also a --username parameter, which is ignored for Argo CD. That is, for now Argo CD's username is always admin.

Argo CD

Argo CD's web UI is available at

Argo CD is installed in a production-ready way that allows for operating Argo CD with Argo CD, using GitOps and a repo-per-team pattern.

When installing the GitOps playground, the following steps are performed to bootstrap Argo CD:

  • The following repos are created and initialized:
    • argocd (management and config of Argo CD itself),
    • example-apps (example for a developer/application team's GitOps repo) and
    • cluster-resources (example for a cluster admin or infra/platform team's repo)
  • Argo CD is installed imperatively via a helm chart.
  • Two resources are applied imperatively to the cluster: an AppProject called argocd and an Application called bootstrap. These are also contained within the argocd repository.

From there everything is managed via GitOps. This diagram shows how it works.

  1. The bootstrap application manages the folder applications, which also contains bootstrap itself.
    With this, changes to the bootstrap application can be done via GitOps. The bootstrap application also deploys other apps (App Of Apps pattern)
  2. The argocd application manages the folder argocd which contains Argo CD's resources as an umbrella helm chart.
    The umbrella chart pattern allows describing the actual values in values.yaml and deploying additional resources (such as secrets and ingresses) via the templates folder. The actual ArgoCD chart is declared in the Chart.yaml
  3. The Chart.yaml contains the Argo CD helm chart as dependency. It points to a deterministic version of the Chart (pinned via Chart.lock) that is pulled from the Chart repository on the internet.
    This mechanism can be used to upgrade Argo CD via GitOps. See the Readme of the argocd repository for details.
  4. The projects application manages the projects folder, which contains the following AppProjects:
    • the argocd project, used for bootstrapping
    • the built-in default project (which is restricted to eliminate threats to security)
    • one project per team (to implement least privilege and also notifications per team):
      • cluster-resources (for platform admin, needs more access to cluster) and
      • example-apps (for developers, needs less access to cluster)
  5. The cluster-resources application points to the cluster-resources git repository (argocd folder), which has the typical folder structure of a GitOps repository (explained in the next step). This way, the platform admins use GitOps in the same way as their "customers" (the developers) and can provide better support.
  6. The example-apps application points to the example-apps git repository (argocd folder again). Like the cluster-resources, it also has the typical folder structure of a GitOps repository:
    • apps - contains the kubernetes resources of all applications (the actual YAML)
    • argocd - contains Argo CD Applications that point to subfolders of apps (App Of Apps pattern, again)
    • misc - contains kubernetes resources, that do not belong to specific applications (namespaces, RBAC, resources used by multiple apps, etc.)
  7. The misc application points to the misc folder
  8. The my-app-staging application points to the apps/my-app/staging folder within the same repo. This provides a folder structure for release promotion. The my-app-* applications implement the Environment per App Pattern. This pattern allows each application to have its own environments, e.g. production and staging or none at all. Note that the actual YAML here could either be pushed manually or using the CI server. The applications contain examples that push config changes from the app repo to the GitOps repo using the CI server. This implementation mixes the Repo per Team and Repo per App patterns
  9. The corresponding production environment is realized using the my-app-production application, which points to the apps/my-app/production folder within the same repo.
    Note that it is recommended to protect the production folders from manual access, if supported by the SCM of your choice.
    Alternatively, instead of different YAML files as used in the diagram, these applications could be realized as
    • Two applications in the same YAML (implemented in the playground, see e.g. petclinic-plain.yaml)
    • Two applications with the same name in different namespaces, when Argo CD is configured to search for applications in different namespaces (implemented in the playground, see Argo CD's values.yaml - application.namespaces setting)
    • One ApplicationSet, using the git generator for directories (not used in GitOps playground, yet)

To keep things simpler, the GitOps playground only uses one kubernetes cluster, effectively implementing the Standalone pattern. However, the repo structure could also be used to serve multiple clusters, in a Hub and Spoke pattern: additional clusters could either be defined in the values.yaml or as secrets via the templates folder.

We're also working on an optional implementation of the namespaced pattern, using the Argo CD operator.

Why not use argocd-autopilot?

An advanced question: Why does the GitOps playground not use the argocd-autopilot?

The short answer is: as of May 2023 (version 0.4.15), it looks far from ready for production.

Here is a diagram that shows what the repo structure created by autopilot looks like:

Here are some thoughts on why we deem it not a good fit for production:

  • The version of ArgoCD is not pinned.
    • Instead, the kustomization.yaml (3 in the diagram) points to a base within the autopilot repo, which in turn points to the stable branch of the Argo CD repo.
    • While it might be possible to pin the version using Kustomize, this is not the default and looks complicated.
    • A non-deterministic version calls for trouble. Upgrades of Argo CD might happen unnoticed.
    • What about breaking changes? What about disaster recovery?
  • The repository structure autopilot creates is more complicated (i.e. difficult to understand and maintain) than the one used in the playground
    • Why does the autopilot-bootstrap application (1 in the diagram) not live within the GitOps repo but only in the cluster?
    • The approach of an ApplicationSet within the AppProject's YAML pointing to a config.json (more difficult to write than YAML) is difficult to grasp (4 and 6 in the diagram).
    • The cluster-resources ApplicationSet is a good approach to multi-cluster, but again requires writing JSON (4 in the diagram).
  • Projects are used to realize environments (6 and 7 in the diagram).
    How would we separate teams in this monorepo structure?
    One idea would be to use multiple Argo CD instances, realising a Standalone pattern. This would mean that every team would have to manage its own ArgoCD instance.
    How could this task be delegated to a dedicated platform team? These are the questions that lead to the structure realized in the GitOps playground.

Flux

Flux does not come with a UI out of the box, so if you want to interact with it, the flux CLI is the best option.
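
For example, with the flux CLI installed and your kubeconfig pointing at the playground cluster, you can inspect and reconcile the Flux resources like this (a sketch; the kustomization name flux-system is the flux CLI default and an assumption here):

# List all Flux sources, kustomizations and helm releases in all namespaces
flux get all -A

# Trigger a reconciliation of the root kustomization, including its git source
flux reconcile kustomization flux-system --with-source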

For Flux, the playground implements a monorepo pattern that adheres to the repo structure created by the flux CLI:

fluxv2-repo structure

The position of the apps does not completely adhere to the recommended repo structure, though. See also #109.

For upgrading Flux, see the Readme of the flux repository

Jenkins

Jenkins is available at

You can enable browser notifications about build results via a button in the lower right corner of Jenkins Web UI.

Note that this only works when using localhost or https://.

Enable Jenkins Notifications

Example of a Jenkins browser notification

External Jenkins

You can set an external Jenkins server via the following parameters when applying the playground. See Parameters for examples.

  • --jenkins-url,
  • --jenkins-username,
  • --jenkins-password

Note that the example applications' pipelines will only run on a Jenkins that uses agents that provide a Docker host. That is, Jenkins must be able to run e.g. docker ps successfully on the agent.

The user has to have the following privileges:

  • install plugins
  • set credentials
  • create jobs
  • restart Jenkins

SCM-Manager

SCM-Manager is available at

External SCM-Manager

You can set an external SCM-Manager via the following parameters when applying the playground. See Parameters for examples.

  • --scmm-url,
  • --scmm-username,
  • --scmm-password

The user on the SCM has to have privileges to:

  • add / edit users
  • add / edit permissions
  • add / edit repositories
  • add / edit proxy
  • install plugins

Monitoring tools

Set the parameter --monitoring to deploy the kube-prometheus-stack via its Helm chart, including Argo CD dashboards.

This leads to the following tools being exposed:

Grafana can be used to query and visualize metrics via Prometheus. Prometheus itself is not exposed by default.
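
If you do want to reach the Prometheus UI, a kubectl port-forward is one option. Note that the namespace and service name below are assumptions based on a default kube-prometheus-stack installation, not values taken from the playground:

# Forward the Prometheus service to localhost, then open http://localhost:9090
kubectl -n monitoring port-forward svc/kube-prometheus-stack-prometheus 9090:9090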

In addition, argocd-notifications is set up. Applications deployed with Argo CD will then send alerts via email to MailHog when, for example, the sync status is failed.

Note that this only works with Argo CD so far.

Secrets Management Tools

Via the --vault parameter, you can deploy HashiCorp Vault and the External Secrets Operator into your GitOps playground.

With this, the whole flow from secret value in Vault to kubernetes Secret via External Secrets Operator can be seen in action:

External Secret Operator <-> Vault - flow

For this to work, the GitOps playground configures the whole chain in Kubernetes and Vault (when dev mode is used):

External Secret Operator Custom Resources

  • In the k8s namespaces argocd-staging and argocd-production:
    • Creates a SecretStore and a ServiceAccount (used to authenticate with Vault)
    • Creates ExternalSecrets
  • In Vault:
    • Creates secrets for staging and prod
    • Creates a human user for changing the secrets
    • Authorizes the service accounts on those secrets
  • Creates an example app that uses the secrets
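
To illustrate what this chain looks like, here is a minimal, hypothetical ExternalSecret that references the vault-backend SecretStore. Names and the secret path follow the conventions described in this README, but the manifest is an illustrative sketch, not a copy of the playground's actual resources:

kubectl apply -n argocd-staging -f - <<EOF
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: nginx-secret
spec:
  refreshInterval: 10s
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: nginx-secret              # name of the resulting Kubernetes Secret
  data:
    - secretKey: some-secret        # key inside the Kubernetes Secret
      remoteRef:
        key: secret/staging/nginx-secret   # path of the secret in Vault
        property: some-secret              # field within that Vault secret
EOF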

dev mode

For testing you can set the parameter --vault=dev to deploy Vault in development mode. This will lead to

  • Vault being transient, i.e. all changes during runtime are not persisted. That means a restart will reset it to defaults.
  • Vault being initialized with some fixed secrets that are used in the example app, see below.
  • Vault authorization being initialized with the service accounts used in the example SecretStores for the External Secrets Operator.
  • Vault being initialized with the usual admin/admin account (can be overridden with --username and --password).

The secrets are then picked up by the vault-backend SecretStores (which connect the External Secrets Operator with Vault) in the argocd-staging and argocd-production namespaces.

You can reach the Vault UI at

  • http://localhost:8200 (k3d)
  • scripts/get-remote-url vault-ui secrets (remote k8s)
  • You can log in via the user account mentioned above.
    If necessary, the root token can be found in the log:
    kubectl logs -n secrets vault-0 | grep 'Root Token'

prod mode

When using --vault=prod you'll have to initialize Vault manually, but on the other hand it will persist changes.

If you want the example app to work, you'll have to manually

  • set up Vault and unseal it, and
  • authorize the Vault service accounts in the argocd-production and argocd-staging namespaces. See SecretStores and dev-post-start.sh for an example.
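
A minimal sketch of initializing and unsealing Vault from the CLI, assuming the pod and namespace names used elsewhere in this README (vault-0 in the secrets namespace):

# Initialize Vault; prints unseal keys and the root token - store them safely
kubectl exec -n secrets vault-0 -- vault operator init

# Unseal using the required number of unseal keys (repeat with different keys)
kubectl exec -n secrets vault-0 -- vault operator unseal <unseal-key>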

Example app

With vault in dev mode and ArgoCD enabled, the example app applications/nginx/argocd/helm-jenkins will be deployed in a way that exposes the vault secrets secret/<environment>/nginx-secret via HTTP on the URL http://<host>/secret, for example http://localhost:30024/secret.

While exposing secrets on the web is a very bad practice, it's very good for demoing auto reload of a secret changed in vault.

To demo this, you could

  • change the staging secret (e.g. via the Vault UI or CLI; see the sketch after this list)
  • Wait for the change to show up on the web, e.g. like so:
while true; do echo -n "$(date '+%Y-%m-%d %H:%M:%S'): " ; \
  curl http://localhost:30024/secret/ ; echo; sleep 1; done
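
Changing the staging secret could, for example, be done via the Vault CLI inside the Vault pod. This is only a sketch; the secret path follows the convention mentioned above (secret/<environment>/nginx-secret), and the field name some-secret is an assumption:

kubectl exec -n secrets vault-0 -- \
  vault kv put secret/staging/nginx-secret some-secret="changed at $(date)"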

This usually takes between a couple of seconds and 1-2 minutes. This time consists of the ExternalSecret's refreshInterval, the kubelet sync period (defaults to 1 minute), and cache propagation delay.

The following video shows this demo in time-lapse:

secrets-demo-video.mp4

Example Applications

Each GitOps operator comes with a couple of example applications that allow for experimenting with different GitOps features.

All applications are deployed via separated application and GitOps repos:

  • Separation of app repo (e.g. petclinic-plain) and GitOps repo (e.g. argocd/example-app or fluxv2/gitops)
  • Config is maintained in app repo,
  • CI Server writes to GitOps repo and creates PullRequests.

The applications implement a simple staging mechanism:

  • After a successful Jenkins build, the staging application will be deployed into the cluster by the GitOps operator.
  • Deployment of production applications can be triggered by accepting pull requests.
  • For some applications, working without a CI server and committing directly to the GitOps repo is pragmatic
    (e.g. for 3rd-party applications like NGINX, such as argocd/nginx-helm-dependency)

app-repo-vs-gitops-repo

Note that for ArgoCD the GitOps-related logic is implemented in the gitops-build-lib for Jenkins. See the README there for more options like

  • staging,
  • resource creation,
  • validation (fail early / shift left).

Please note that it might take about a minute after the pull request has been accepted for the GitOps operator to start deploying. Alternatively, you can trigger the deployment via the respective GitOps operator's CLI (flux) or UI (Argo CD).

Flux V2

PetClinic with plain k8s resources

Jenkinsfile

  • Staging
    • local: localhost:30010
    • remote: scripts/get-remote-url spring-petclinic-plain fluxv2-staging
  • Production
    • local: localhost:30011
    • remote: scripts/get-remote-url spring-petclinic-plain fluxv2-production

Argo CD

PetClinic with plain k8s resources

Jenkinsfile for plain deployment

  • Staging
    • local localhost:30020
    • remote: scripts/get-remote-url spring-petclinic-plain argocd-staging
  • Production
    • local localhost:30021
    • remote: scripts/get-remote-url spring-petclinic-plain argocd-production
PetClinic with helm

Jenkinsfile for helm deployment

  • Staging
    • local localhost:30022
    • remote: scripts/get-remote-url spring-petclinic-helm argocd-staging
  • Production
    • local localhost:30023
    • remote: scripts/get-remote-url spring-petclinic-helm argocd-production
3rd Party app (NGINX) with helm, templated in Jenkins

Jenkinsfile

  • Staging
    • local: localhost:30024
    • remote: scripts/get-remote-url nginx argocd-staging
  • Production
    • local: localhost:30025
    • remote: scripts/get-remote-url nginx argocd-production
3rd Party app (NGINX) with helm, using Helm dependency mechanism
  • local: localhost:30026
  • remote: scripts/get-remote-url nginx argocd-staging

Development

See docs/developers.md
