Gatekeeper Policy Manager (GPM)

Gatekeeper Policy Manager is a simple read-only web UI for viewing OPA Gatekeeper policies' status in a Kubernetes Cluster.

The target Kubernetes Cluster can be the same where GPM is running or some other remote cluster(s) using a kubeconfig file. You can also run GPM locally in a client machine and connect to a remote cluster.

GPM can display all the defined Constraint Templates with their rego code, all the Gatekeeper Configuration CRDs, and all the Constraints with their current status, violations, enforcement action, matches definitions, etc.
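
For context, here is a hypothetical Constraint of the kind GPM renders, adapted from Gatekeeper's standard required-labels sample; the spec (enforcement action, match definition, parameters) and the status violations reported by Gatekeeper are what the UI surfaces:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels            # constraint kind generated by a ConstraintTemplate
metadata:
  name: ns-must-have-owner
spec:
  enforcementAction: dryrun        # shown by GPM as the enforcement action
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]       # shown as the match definition
  parameters:
    labels: ["owner"]
status:
  totalViolations: 1
  violations:                      # shown as the current violations
    - kind: Namespace
      name: default
      message: 'you must provide labels: {"owner"}'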

You can see some screenshots below.

Requirements

You'll need OPA Gatekeeper running in your cluster and at least some constraint templates and constraints defined to take advantage of this tool.

You can easily deploy Gatekeeper to your cluster using the (also open source) Kubernetes Fury OPA module.

Deploying GPM

Deploy using Kustomize

To deploy Gatekeeper Policy Manager to your cluster, apply the provided kustomization file by running the following command:

kubectl apply -k .

By default, this will create a deployment and a service, both named gatekeeper-policy-manager, in the gatekeeper-system namespace. We invite you to take a look at the kustomization.yaml file for further configuration.
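
For example, a hypothetical overlay that pins the namespace and image tag without editing the upstream file could look like the following sketch; the remote base URL and fields are illustrative, so check the provided kustomization.yaml for the actual layout:

# kustomization.yaml (illustrative overlay, not part of the repository)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: gatekeeper-system
resources:
  # assumed remote base pointing at the GPM repository
  - github.com/sighupio/gatekeeper-policy-manager?ref=v1.0.6
images:
  - name: quay.io/sighup/gatekeeper-policy-manager
    newTag: v1.0.6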

The app can be run as a Pod in a Kubernetes cluster or locally with a kubeconfig file. It will try its best to autodetect the correct configuration.

Once you've deployed the application, if you haven't set up an ingress, you can access the web UI using port-forward:

kubectl -n gatekeeper-system port-forward svc/gatekeeper-policy-manager 8080:80

Then access it with your browser on: http://127.0.0.1:8080
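
If you would rather expose the UI through an ingress, a minimal sketch could look like this, assuming an ingress controller is already installed and using a hypothetical hostname:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gatekeeper-policy-manager
  namespace: gatekeeper-system
spec:
  rules:
    - host: gpm.example.com                         # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gatekeeper-policy-manager     # the service created above
                port:
                  number: 80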

Deploy using Helm

It is also possible to deploy GPM using the provided Helm Chart.

First create a values file, for example my-values.yaml, with your custom values for the release. See the chart's readme and the default values.yaml for more information.
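
As a minimal sketch, a values file could simply pin the image tag (the same key set with --set in the command below); any other keys should be taken from the chart's default values.yaml rather than from this example:

# my-values.yaml (minimal example)
image:
  tag: v1.0.6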

Then, execute:

helm repo add gpm https://sighupio.github.io/gatekeeper-policy-manager
helm upgrade --install --namespace gatekeeper-system --set image.tag=v1.0.6 --values my-values.yaml gatekeeper-policy-manager gpm/gatekeeper-policy-manager

Don't forget to replace my-values.yaml with the path to your values file.

Running locally

GPM can also be run locally using Docker and a kubeconfig file. Assuming that the kubeconfig file you want to use is located at ~/.kube/config, the command to run GPM locally is:

docker run -v ~/.kube/config:/home/gpm/.kube/config -p 8080:8080 quay.io/sighup/gatekeeper-policy-manager:v1.0.6

Then access it with your browser on: http://127.0.0.1:8080

You can also run the flask app directly, see the development section for further information.

Configuration

GPM is a stateless application, but it can be configured using environment variables. The possible configurations are:

| Environment Variable Name | Description | Default |
| --- | --- | --- |
| GPM_SECRET_KEY | The secret key used to generate tokens. Change this value in production. | g8k1p3rp0l1c7m4n4g3r |
| KUBECONFIG | Path to a kubeconfig file. If provided while running inside a cluster, this configuration file will be used instead of the cluster's API. | |
| GPM_LOG_LEVEL | Log level (see the Python logging docs for available levels). | INFO |
| GPM_AUTH_ENABLED | Enable authentication. Current options: "Anonymous", "OIDC". | Anonymous |
| GPM_PREFERRED_URL_SCHEME | URL scheme to be used while generating links. | http |
| GPM_OIDC_REDIRECT_DOMAIN | The domain where GPM is running. This is where the client will be redirected after authenticating. | |
| GPM_OIDC_CLIENT_ID | The Client ID used to authenticate against the OIDC Provider. | |
| GPM_OIDC_CLIENT_SECRET | The Client Secret used to authenticate against the OIDC Provider (optional). | |
| GPM_OIDC_ISSUER | OIDC Issuer hostname (required if OIDC authentication is enabled). | |
| GPM_OIDC_AUTHORIZATION_ENDPOINT | OIDC Authorization Endpoint (optional, see note below). | |
| GPM_OIDC_JWKS_URI | OIDC JWKS URI (optional, see note below). | |
| GPM_OIDC_TOKEN_ENDPOINT | OIDC Token Endpoint (optional, see note below). | |
| GPM_OIDC_INTROSPECTION_ENDPOINT | OIDC Introspection Endpoint (optional, see note below). | |
| GPM_OIDC_USERINFO_ENDPOINT | OIDC Userinfo Endpoint (optional, see note below). | |
| GPM_OIDC_END_SESSION_ENDPOINT | OIDC End Session Endpoint (optional, see note below). | |

Note: setting any of the optional OIDC endpoint parameters above disables discovery of the rest of the provider configuration; if you set one of them, set all the others as well.

⚠️ Please note that OIDC authentication is in beta state. It has been tested to work with Keycloak as a provider.

These environment variables are already provided and ready to be set in the manifests/enable-oidc.yaml file.
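
As an illustration of how those variables map to a Deployment (the manifests/enable-oidc.yaml file is the authoritative reference; the values and the Secret below are hypothetical placeholders), the container's env section could look like:

env:
  - name: GPM_AUTH_ENABLED
    value: "OIDC"
  - name: GPM_OIDC_ISSUER
    value: "https://keycloak.example.com/auth/realms/example"   # example issuer
  - name: GPM_OIDC_REDIRECT_DOMAIN
    value: "https://gpm.example.com"                            # where GPM is exposed
  - name: GPM_OIDC_CLIENT_ID
    value: "gatekeeper-policy-manager"
  - name: GPM_OIDC_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: gpm-oidc            # hypothetical Secret holding the client secret
        key: client-secret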

Multi-cluster support

Since v1.0.6, GPM has basic multi-cluster support when using a kubeconfig with more than one context. GPM lets you choose the context right from the UI.

If you want to run GPM in a cluster with multi-cluster support, it's as easy as mounting a kubeconfig file with the cluster access configuration in GPM's pod(s) and setting the KUBECONFIG environment variable to the path of the mounted file. Alternatively, you can simply mount it at /home/gpm/.kube/config and GPM will detect it automatically.
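
A minimal sketch of that setup, assuming the kubeconfig is stored in a hypothetical Secret named gpm-kubeconfig (this is only a fragment of the pod template, not the project's manifests):

containers:
  - name: gatekeeper-policy-manager
    image: quay.io/sighup/gatekeeper-policy-manager:v1.0.6
    env:
      - name: KUBECONFIG
        value: /etc/gpm/kubeconfig      # path where the mounted file ends up
    volumeMounts:
      - name: kubeconfig
        mountPath: /etc/gpm
        readOnly: true
volumes:
  - name: kubeconfig
    secret:
      secretName: gpm-kubeconfig        # hypothetical Secret with a "kubeconfig" key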

Please remember that the user for the clusters should have the right permissions. You can use the manifests/rbac.yaml file as a reference.
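
Roughly, that means read access to Gatekeeper's CRDs; a sketch of such a ClusterRole follows (illustrative only, check the API groups and verbs against manifests/rbac.yaml):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gpm-read-only                   # illustrative name
rules:
  - apiGroups:
      - constraints.gatekeeper.sh       # Constraints
      - templates.gatekeeper.sh         # ConstraintTemplates
      - config.gatekeeper.sh            # Gatekeeper Configuration CRDs
    resources: ["*"]
    verbs: ["get", "list"]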

Also note that the cluster where GPM is running should be able to reach the other clusters.

When you run GPM locally, you are already using a kubeconfig file to connect to the clusters; you should see all your defined contexts and be able to switch between them easily from the UI.

AWS IAM Authentication

If you want to use a Kubeconfig with IAM Authentication, you'll need to customize GPM's container image because the IAM authentication uses external AWS binaries that are not included by default in the image.

You can customize the container image with a Dockerfile like the following:

FROM curlimages/curl:7.81.0 as downloader
# Download the aws-iam-authenticator binary (-L follows GitHub's release redirects)
RUN curl -L https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.5.5/aws-iam-authenticator_0.5.5_linux_amd64 --output /tmp/aws-iam-authenticator
RUN chmod +x /tmp/aws-iam-authenticator

FROM quay.io/sighup/gatekeeper-policy-manager:v1.0.6
# Add the authenticator on top of the official GPM image
COPY --from=downloader --chown=root:root /tmp/aws-iam-authenticator /usr/local/bin/

You may also need to add the aws CLI; you can use the same approach as above.

Make sure that your kubeconfig has the apiVersion set to client.authentication.k8s.io/v1beta1.

You can read more in this issue.
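
For reference, a kubeconfig user entry using that apiVersion with aws-iam-authenticator typically looks like the sketch below; the user and cluster names are placeholders:

users:
  - name: eks-admin                      # placeholder user name
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws-iam-authenticator   # the binary added in the Dockerfile above
        args:
          - token
          - "-i"
          - my-eks-cluster               # replace with your cluster name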

Screenshots

  • Welcome
  • Constraint Templates view
  • Constraint Templates view (rego code)
  • Constraint view
  • Constraint view (2)
  • Constraint report
  • Configurations view
  • Cluster selector

Development

GPM is written in Python using the Flask framework for the backend and React with Elastic UI and the Fury theme for the frontend.

To develop GPM, you'll need to create a Python 3 virtual environment, install all the dependencies specified in the provided requirements.txt, and build the React frontend; then you are good to start hacking.

The following commands should get you up and running:

# Build frontend and copy over to static folder
$ pushd app/web-client
$ yarn install && yarn build
$ cp -r build/* ../static-content/
$ popd
# Create a virtualenv
$ python3 -m venv env
# Activate it
$ source ./env/bin/activate
# Install all the dependencies
$ pip install -r app/requirements.txt
# Run the development server
$ FLASK_APP=app/app.py flask run

Access to a Kubernetes cluster with OPA Gatekeeper deployed is recommended to debug the application.

You'll need an OIDC provider to test the OIDC authentication. You can use our fury-kubernetes-keycloak module.

Roadmap

The following is a wishlist of features that we would like to add to GPM (in no particular order):

  • List the constraints that are currently using a ConstraintTemplate
  • Polished OIDC authentication
  • LDAP authentication
  • Better syntax highlighting for the rego code snippets
  • Root-less docker image
  • Multi-cluster view
  • Minimal write capabilities?
  • Rewrite app in Golang?

Please let us know if you are using GPM and what features you would like to see by creating an issue here on GitHub 💪🏻
