• This repository has been archived on 15/Jun/2018
  • Stars: 106
  • Rank: 324,147 (Top 7%)
  • Language: Shell
  • Created: almost 8 years ago
  • Updated: over 1 year ago



Simple demo using minikube to deploy Prometheus and Grafana

Minikube and Prometheus Demo

This is a quick demo of using minikube to test Prometheus. It is meant to familiarize people with working with minikube, kubectl, Prometheus, and Grafana.

Prerequisites

Note: this has been developed and tested on OS X. Other platforms should be similar.

Bootstrap

Install the prerequisites.

If using the hyperkit driver for minikube, you should be able to create and start a single node, local Kubernetes cluster by running: minikube start --vm-driver=hyperkit. If not using the hyperkit driver, just run minikube start. See the minikube docs for more information.
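The bootstrap steps above can be sketched as follows (hyperkit driver assumed; drop the flag to use your default driver):

```shell
# Create and start a single-node local Kubernetes cluster using the
# hyperkit driver. The --vm-driver flag is only needed the first time
# the cluster is created.
minikube start --vm-driver=hyperkit

# Verify that the node came up
minikube status
```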

You can check that the node is up and running by running: minikube status. You should see something like:

minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.64.48

If you want to stop the cluster, run minikube stop. To start it, run minikube start. Note: you only need to pass the driver argument the first time you create the cluster (or if you destroy and recreate it).

You can run minikube dashboard and a browser window should open with the Kubernetes Dashboard running. You may want to click around to familiarize yourself with it.

Run kubectl cluster-info and you should see something like:

kubernetes master is running at https://192.168.64.2:8443
kubernetes-dashboard is running at https://192.168.64.2:8443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

To select the local minikube cluster for kubectl (useful if you are using multiple clusters), run kubectl config use-context minikube

Monitoring Namespace

We are going to install the monitoring components into a "monitoring" namespace. While this is not necessary, it does show "best practices" in organizing applications by namespace rather than deploying everything into the default namespace.

First, create the monitoring namespace: kubectl apply -f monitoring-namespace.yaml.
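If you want to see what such a manifest contains, an equivalent Namespace can be applied inline (a minimal sketch; the repository's monitoring-namespace.yaml may differ):

```shell
# Apply a minimal Namespace manifest from stdin; equivalent to the
# monitoring-namespace.yaml step (contents assumed, not taken from the repo)
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
EOF
```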

You can now list the namespaces by running kubectl get namespaces and you should see something similar to:

NAME          STATUS    AGE
default       Active    6d
kube-system   Active    6d
monitoring    Active    3d

Deploying Prometheus and Grafana

Let's step through deploying Prometheus. The configuration we will use is based on a blog post by CoreOS and the example configuration included with Prometheus.

Prometheus Configuration

Prometheus will get its configuration from a Kubernetes ConfigMap. This allows us to update the configuration separately from the image. Note: there is a large debate about whether this is a "good" approach or not, but for demo purposes this is fine.

Look at prometheus-config.yaml. The relevant part is in data/prometheus.yml. This is just a Prometheus configuration inlined into the Kubernetes manifest. Note that we are using the in-cluster Kubernetes service account to access the Kubernetes API.

To deploy this to the cluster run kubectl apply -f prometheus-config.yaml. You can view this by running kubectl get configmap --namespace=monitoring prometheus-config -o yaml. You can also see this in the Kubernetes Dashboard.

Prometheus Pod

We will use a single Prometheus pod for this demo. Take a look at prometheus-deployment.yaml. This is a Kubernetes Deployment that describes the image to use for the pod, resources, etc. Note:

  • In the metadata section, we give the pod a label with a key of name and a value of prometheus. This will come in handy later.
  • In annotations, we set a couple of key/value pairs that allow Prometheus to autodiscover and scrape itself.
  • We are using an emptyDir volume for the Prometheus data. This is basically a temporary directory that will get erased on every restart of the container. For a demo this is fine, but we'd do something more persistent for other use cases.
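As a sketch, the label and annotations described above commonly follow the prometheus.io convention (assumed here; the exact keys must match the scrape config in prometheus-config.yaml):

```yaml
metadata:
  labels:
    name: prometheus
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
```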

Deploy the deployment by running kubectl apply -f prometheus-deployment.yaml. You can see this by running kubectl get deployments --namespace=monitoring.

Prometheus Service

Now that we have Prometheus deployed, we actually want to get to the UI. To do this, we will expose it using a Kubernetes Service.

In prometheus-service.yaml, there are a few things to note:

  • The label selector searches for pods that have been labeled with name: prometheus as we labeled our pod in the deployment.
  • We are exposing port 9090 of the running pods.
  • We are using a "NodePort." This means that Kubernetes will open a port on each node in our cluster. You can query the API to get this port.
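For example, the assigned port can be read back with kubectl's jsonpath output (a sketch using the service and namespace names from this demo):

```shell
# Read the NodePort that Kubernetes assigned to the prometheus service
NODE_PORT=$(kubectl get service prometheus --namespace=monitoring \
  -o jsonpath='{.spec.ports[0].nodePort}')

# Reach the Prometheus UI directly on the minikube node
curl "http://$(minikube ip):${NODE_PORT}/graph"
```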

Create the service by running kubectl apply -f prometheus-service.yaml. You can then view it by running kubectl get services --namespace=monitoring prometheus -o yaml.

One thing to note is that you will see something like nodePort: 30827 in the output. We could access the service on that port on any node in the cluster. Minikube comes with a helper to do just that: run minikube service --namespace=monitoring prometheus and it will open a browser window pointed at the service.

From the Prometheus console, you can explore the metrics it is collecting and do some basic graphing. You can also view the configuration and the targets. Click Status->Targets and you should see the Kubernetes cluster and nodes. You should also see that Prometheus discovered itself under kubernetes-pods.

Deploying Grafana

You can deploy grafana by creating its deployment and service by running kubectl apply -f grafana-deployment.yaml and kubectl apply -f grafana-service.yaml. Feel free to explore via the kubectl command line and/or the Dashboard.

Go to grafana by running minikube service --namespace=monitoring grafana. Username is admin and password is also admin.

Let's add Prometheus as a datasource.

  • Click on the icon in the upper left of grafana and go to "Data Sources".
  • Click "Add data source".
  • For name, just use "prometheus"
  • Select "Prometheus" as the type
  • For the URL, we will actually use Kubernetes DNS service discovery, so just enter http://prometheus:9090. This means that Grafana will look up the prometheus service running in the same namespace on port 9090.

Create a new dashboard by clicking on the upper-left icon and selecting Dashboard->New. Click the green control and add a graph panel. Under metrics, select "prometheus" as the datasource. For the query, use sum(container_memory_usage_bytes) by (pod_name). Click save. This graphs the memory used per pod.
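The same query can also be run against Prometheus's HTTP API, which is a quick way to check that the expression returns data (URL shown for in-cluster DNS; substitute the minikube service URL when querying from your laptop):

```shell
# Query the Prometheus HTTP API with the dashboard's expression
curl -G 'http://prometheus:9090/api/v1/query' \
  --data-urlencode 'query=sum(container_memory_usage_bytes) by (pod_name)'
```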

Prometheus Node Exporter

We can also use Prometheus to collect metrics from the nodes themselves. We use the node exporter for this and deploy it to every node with a Kubernetes DaemonSet.

In node-exporter-daemonset.yml you will see that it looks similar to the deployment we did earlier. Notice that we run this in privileged mode (privileged: true) as it needs access to various information about the node to perform monitoring. Also notice that we are mounting in a few node directories to monitor various things.

Run kubectl apply -f node-exporter-daemonset.yml to create the daemon set. This will run an instance of this on every node. In minikube, there is only one node, but this concept scales to thousands of nodes.

You can verify that it is running by using the command line or the dashboard.
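For example (a sketch using standard kubectl commands):

```shell
# The DaemonSet should report one desired/ready pod per node
kubectl get daemonsets --namespace=monitoring

# List the node-exporter pods and which node each landed on
kubectl get pods --namespace=monitoring -o wide
```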

After a minute or so, Prometheus will discover the node itself and begin collecting metrics from it. To create a dashboard in Grafana using node metrics, follow the same procedure as before but use node_load1 as the metric query. This is the one-minute load average of the nodes.

Note: in a "real" implementation, we would label the pods in an easily queryable pattern.

To clean up, you can delete the entire monitoring namespace: kubectl delete namespace monitoring
