kubeapply

Overview

kubeapply is a lightweight tool for git-based management of Kubernetes configs. It supports configuration in raw YAML, templated YAML, Helm charts, and/or skycfg, and facilitates the complete change workflow including config expansion, validation, diff generation, and applying.

It can be run from either the command line or in a webhooks mode that responds interactively to Github pull request events.

Motivation

Managing Kubernetes configuration in a large organization can be painful. Existing tools like Helm are useful for certain parts of the workflow (e.g., config generation), but are often too heavyweight for small, internal services. Once configs are generated, it's hard to understand what impact they will have in production and then apply them in a consistent way.

We built kubeapply to make the end-to-end config management process easier and more consistent. The design choices made were motivated by the following goals:

  1. Support git-based workflows, i.e., make it possible to understand the state of a cluster by looking at a repo
  2. Make it easy to share configuration between different environments (e.g., staging vs. production) and cluster types
  3. Wrap existing tooling (kubectl, helm, etc.) whenever possible as opposed to reimplementing their functionality
  4. Allow running on either the command-line or in Github
  5. Support Helm charts, simple templates, and skycfg

See this blog post for more details.

Disclaimer

The tool is designed for our Kubernetes-related workflows at Segment. While we hope it can work for others, not all features might be directly applicable to other environments. We welcome feedback and collaboration to make kubeapply useful to more people!

🆕 Terraform version of kubeapply

We recently open-sourced a Terraform-provider-based version of this tool; see the repository and documentation in the Terraform registry for more details.

Note that the Terraform version has slightly different interfaces and assumptions (e.g., no support for Helm charts), so it's not a drop-in replacement for the tooling here, but it follows the same general flow and configuration philosophy.

Getting started

Prerequisites

kubeapply depends on the following tools:

  • kubectl: v1.16 or newer
  • helm: v3.5.0 or newer (only needed if using helm charts)

Make sure that they're installed locally and available in your path.

Installing

Install the kubeapply binary by running:

go install github.com/segmentio/kubeapply/cmd/kubeapply@latest

You can also build and install the binary by running make install in the root of this repo.

Quick tour

See the README in the examples/kubeapply-test-cluster directory.

Configuration

Repo layout

Each cluster type to be managed by kubeapply lives in a directory in a source-controlled repo. This directory contains a set of cluster configs and a profile that is shared across them. kubeapply expands a cluster config plus the profile into a set of kubectl-compatible YAML configs for the cluster. Although these expanded configs can be derived in a fully reproducible way from the cluster configs and profile, they're typically checked in to the repo for easier code review.

The following diagram shows the recommended directory layout:

.
└── clusters
    └── [cluster type]
        ├── expanded
        │   └── ...
        ├── profile
        │   ├── [namespace 1]
        │   │   ├── [config 1]
        │   │   └── [config 2]
        │   ├── [namespace 2]
        │   └── ...
        ├── [cluster config 1]
        └── [cluster config 2]

Each of the subcomponents is described in more detail in the sections below. See also examples/kubeapply-test-cluster for a full example.

Cluster config

Each cluster instance is configured in a single YAML file. Typically, these instances will vary by the environment (e.g., staging vs. production) and/or the region/account in which they're deployed, but will share the same profile. At Segment, we name the files by the environment and region, e.g., stage-us-west-2.yaml, production-us-west-2.yaml, but you can use any naming convention that feels comfortable.

Each cluster config has the following format:

# Basic information about the cluster. The combination of these should uniquely identify
# a single cluster instance running in a single location.
cluster: my-cluster    # Name of the cluster
region: us-west-2      # Region in which the cluster is running
env: staging           # Environment/account in which the cluster is running

# Where charts can be found by default. Only required if using Helm chart sources.
# See the section below for the supported URL formats.
charts: "file://../../charts"

# Arbitrary parameters that can be used in templates, Helm charts, and skycfg modules.
#
# These are typically used for things that will vary by cluster instance and/or will
# frequently change, e.g. the number of replicas for deployments, container image URIs, etc.
parameters:
  service1:
    imageTag: abc123
    replicas: 2

  service2:
    imageTag: def678
    replicas: 5
  ...

Profile

The profile directory contains source files that are used to generate Kubernetes configs for a specific cluster. By convention, these files are organized into subdirectories by namespace, and can be further subdivided below that.

The tool currently supports four kinds of input source configs, described in more detail below.

(1) Raw YAML

Files of the form [name].yaml will be treated as normal YAML files and copied to the expanded directory as-is.

(2) Templated YAML

Files with names ending in .gotpl.yaml will be templated using the golang text/template package with the cluster config as the input data. You can also use the functions in the sprig library.

See this file for an example.

Note that template expansion happens before Helm chart evaluation, so you can template Helm value files as well.
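To illustrate the mechanism, here is a minimal sketch of expanding a .gotpl.yaml-style template with the standard library's text/template package. The clusterConfig struct and its field names are assumptions made for this sketch (kubeapply defines the real template data), and sprig functions are omitted for brevity:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Hypothetical subset of the data a template might receive; the real
// structure is defined by kubeapply, not by this sketch.
type clusterConfig struct {
	Cluster    string
	Region     string
	Env        string
	Parameters map[string]map[string]interface{}
}

// A service1.gotpl.yaml-style template referencing cluster config fields.
const src = `apiVersion: apps/v1
kind: Deployment
metadata:
  name: service1
  labels:
    env: {{ .Env }}
spec:
  replicas: {{ index .Parameters "service1" "replicas" }}
`

// expand renders the template with the given cluster config as input data.
func expand(cfg clusterConfig) (string, error) {
	t, err := template.New("service1").Parse(src)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, cfg); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	cfg := clusterConfig{
		Cluster: "my-cluster",
		Region:  "us-west-2",
		Env:     "staging",
		Parameters: map[string]map[string]interface{}{
			"service1": {"replicas": 2},
		},
	}
	out, err := expand(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```

Running this prints the deployment YAML with env and replicas filled in from the config, which is the same shape of substitution a profile template performs per cluster instance.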

(3) Helm chart values

Files named [chart name].helm.yaml will be treated as values files for the associated chart. The chart will be expanded using helm template ... and the outputs copied into the expanded directory. See this file for an example (which references the envoy chart).

By default, charts are sourced from the URL set in the cluster config charts parameter. Currently, the tool supports URLs of the form file://, http://, https://, git://, git-https://, and s3://.

You can override the source for a specific chart by including a # charts: [url] comment at the top of the values file. This is helpful for testing out a new version for just one chart in the profile.
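For example, an envoy.helm.yaml values file could pin a test chart source on its first line while overriding a couple of values. The s3 URL is hypothetical, and the value keys shown depend entirely on the chart being used:

```
# charts: s3://my-bucket/charts-test
replicaCount: 3
image:
  tag: v1.20.0
```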

(4) Skycfg/starlark modules

Files ending in .star will be evaluated using the skycfg framework to generate one or more Kubernetes protobufs, which are then converted to kubectl-compatible YAML and copied into the expanded directory.

Skycfg uses the Starlark language along with typed Kubernetes structs (from Protocol Buffers), so it can provide more structure and less repetition than YAML-based sources. See this file for an example.

The skycfg support in kubeapply is experimental and unsupported.
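As a rough sketch of what a .star module can look like: a skycfg module exposes a main function that returns a list of protobuf messages. The proto package paths and the exact way cluster parameters reach the module are assumptions here, not kubeapply's documented interface:

```
# service1.star -- illustrative only
v1 = proto.package("k8s.io.api.core.v1")
metav1 = proto.package("k8s.io.apimachinery.pkg.apis.meta.v1")

def main(ctx):
    config_map = v1.ConfigMap(
        metadata = metav1.ObjectMeta(
            name = "service1-config",
            namespace = "service1",
        ),
        data = {"greeting": "hello"},
    )
    return [config_map]
```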

Expanded configs

The expanded directory contains the results of expanding out the profile for a cluster instance. These configs are pure YAML that can be applied directly via kubectl apply or, preferably, using the kubeapply apply command (described below).

Usage (CLI)

Expand

kubeapply expand [path to cluster config]

This will expand out all of the configs for the cluster instance, and put them into a subdirectory of the expanded directory. Helm charts are expanded via helm template; other source types use custom code in the kubeapply binary.

Validate

kubeapply validate [path to cluster config] --policy=[path to OPA policy in rego format]

This validates all of the expanded configs for the cluster using the kubeconform library. It also, optionally, supports validating configs using one or more OPA policies in rego format; see the "Experimental features" section below for more details.

Diff

kubeapply diff [path to cluster config] --kubeconfig=[path to kubeconfig]

This wraps kubectl diff to show a diff between the expanded configs on disk and the associated resources in the cluster.

Apply

kubeapply apply [path to cluster config] --kubeconfig=[path to kubeconfig]

This wraps kubectl apply, with some extra logic to apply in a "safe" order (e.g., configmaps before deployments, etc.).

Usage (Github webhooks)

In addition to interactions through the command-line, kubeapply also supports an Atlantis-inspired, Github-based workflow for the diff and apply steps above. This allows the diffs to be more closely reviewed before being applied, and also ensures that the configuration in the repo stays in-sync with the cluster.

Workflow

The end-to-end user flow is fairly similar to the one used with Atlantis for Terraform changes:

  1. Team member changes a cluster config or profile file in the repo, runs kubeapply expand locally
  2. A pull request is opened in Github with the changes
  3. Kubeapply server gets webhook from Github, posts a friendly "help" message and then runs an initial diff
  4. PR owner iterates on change, gets it reviewed
  5. When ready, PR owner posts a kubeapply apply comment
  6. Kubeapply server gets webhook, checks that change has green status and is approved, then applies it
  7. If all changes have been successfully applied, change is automatically merged

Backend

Using the Github webhooks flow requires that you run an HTTP service somewhere that is accessible to Github. Since the requests are sporadic and can be handled without any local state, processing them is a nice use case for a serverless framework like AWS Lambda, and this is how we run it at Segment. Alternatively, you can run a long-running server that responds to the webhooks.

The sections below contain some implementation details for each option.

Option 1: Run via AWS Lambda

The exact setup steps will vary based on your environment and chosen tooling. At a high-level, however, the setup process is:

  1. Build a lambda bundle by running make lambda-zip
  2. Upload the bundle zip to a location in S3
  3. Generate a Github webhook token and a shared secret that will be used for webhook authentication; store these in SSM
  4. Using Terraform, the AWS console, or other tooling of your choice, create:
    1. An externally-facing ALB
    2. An IAM role for your lambda function that has access to the zip bundle in S3, secrets in SSM, etc.
    3. A security group for your lambda function that has access to your cluster control planes
    4. A lambda function that runs the code in the zip bundle when triggered by ALB requests

The lambda is configured via a set of environment variables that are documented in the lambda entrypoint. We use SSM for storing secrets like Github tokens, but it's possible to adapt the code to get these from other places.

Option 2: Run via long-running server

We've provided a basic server entrypoint here. Build a binary via make kubeapply-server, configure and deploy this on your infrastructure of choice, and expose the server to the Internet.

Github configuration

Once you have an externally accessible webhook URL, go to the settings for your repo and add a new webhook.

In the "Event triggers" section, select "Issue comments" and "Pull requests" only. Then, test it out by opening up a new pull request that modifies an expanded kubeapply config.

Experimental features

kubestar

This repo now contains an experimental tool, kubestar, for converting YAML to skycfg-compatible starlark. See this README for details.

Multi-profile support

The cluster config now supports using multiple profiles. Among other use cases, this is useful if you want to share profile-style YAML templates across multiple clusters without dealing with Helm.

To use this, add a profiles section to the cluster config:

cluster: my-cluster
...
profiles:
  - name: [name of first profile]
    url: [url for first profile]
  - name: [name of second profile]
    url: [url for second profile]
  ...

where the urls are in the same format as those for Helm chart locations, e.g. file://path/to/my/file. The outputs of each profile will be expanded into [expanded dir]/[profile name]/....

OPA policy checks

The kubeapply validate subcommand now supports checking expanded configs against policies in Open Policy Agent (OPA) format. This can be helpful for enforcing organization-specific standards, e.g. that images need to be pulled from a particular private registry, that all labels are in a consistent format, etc.

To use this, write up your policies as .rego files as described in the OPA documentation and run kubeapply validate with one or more --policy=[path to policy] arguments. By default, policies should be in the com.segment.kubeapply package. Denial reasons, if any, are returned by setting a deny variable containing a set of denial reason strings. If this set is empty, kubeapply will assume that the config has passed all checks in the policy file.

If a denial reason begins with the string warn:, then that denial will be treated as a non-blocking warning as opposed to an error that causes validation to fail.
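Following those conventions, a policy might look like the sketch below. The package name matches the default described above, while the registry host and the team-label rule are purely illustrative:

```
package com.segment.kubeapply

# Blocking error: images must come from the (hypothetical) private registry.
deny[reason] {
	input.kind == "Deployment"
	image := input.spec.template.spec.containers[_].image
	not startswith(image, "registry.example.com/")
	reason := sprintf("image %v is not from the private registry", [image])
}

# Non-blocking warning, via the warn: prefix.
deny[reason] {
	input.kind == "Deployment"
	not input.metadata.labels.team
	reason := "warn: Deployments should set a team label"
}
```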

See this unit test for some examples.

Testing

Unit tests

Run make test in the repo root.

On Github changes

You can simulate Github webhook responses by running kubeapply with the pull-request subcommand:

kubeapply pull-request \
  --github-token=[personal token] \
  --repo=[repo in format owner/name] \
  --pull-request=[pull request num] \
  --comment-body="kubeapply help"

This will respond locally using the codepath that would be executed in response to a Github webhook for the associated repo and pull request.
