homelab
Monorepo for my personal homelab. It contains applications and Kubernetes manifests for deployment.
- Getting started
- Project structure
- Third party applications
- Prometheus exporters
- Other tools
- User interfaces
- External services
- Cluster upgrades
- Node maintenance
- Managed infrastructure
- Environment
Getting started
This assumes you have the following tools:
To start working:
- Clone the repository
- Install golang-based tools using `make install-tools`
- Run `make` to build all binaries
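Concretely, that looks something like this (the clone URL is an assumption; adjust it if you're working from a fork):

```bash
# Assumed repository URL.
git clone https://github.com/davidsbond/homelab.git
cd homelab

# Install the golang-based tooling, then build all binaries.
make install-tools
make
```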
Project structure
- `cmd` - Entry points to any bespoke applications.
- `hack` - Node host specific config files and tweaks.
- `internal` - Packages used throughout the application code.
- `manifests` - Kubernetes manifests to run all my homelab applications.
- `scripts` - Bash scripts for working within the repository.
- `terraform` - Terraform files for managing infrastructure.
- `vendor` - Vendored third-party code.
Third party applications
Here's a list of third-party applications I'm using alongside my custom applications:
- longhorn - Cloud native distributed block storage for Kubernetes.
- home-assistant - Open source home automation that puts local control and privacy first.
- pihole - A black hole for Internet advertisements.
- traefik - The Cloud Native Application Proxy.
- prometheus - The Prometheus monitoring system and time series database.
- grafana - The open observability platform.
- jaeger - Open source, end-to-end distributed tracing.
- node-exporter - Exporter for machine metrics.
- minio - High Performance, Kubernetes Native Object Storage.
- postgres - The world's most advanced open source database.
- firefly - A free and open source personal finances manager.
- photoprism - Personal Photo Management powered by Go and Google TensorFlow.
- cert-manager - x509 certificate management for Kubernetes.
- docker-registry - A stateless, highly scalable server side application that stores and lets you distribute Docker images.
- fluent-bit - Log processor and forwarder.
Prometheus exporters
I've implemented several custom prometheus exporters in this repo that power my dashboards. These are:
- `coronavirus` - Exports UK coronavirus stats as prometheus metrics.
- `homehub` - Exports statistics from my BT HomeHub as prometheus metrics.
- `pihole` - Exports statistics from my pihole as prometheus metrics.
- `speedtest` - Exports speedtest results as prometheus metrics.
- `weather` - Exports current weather data as prometheus metrics.
- `worldping` - Exports world ping times for the local host as prometheus metrics.
- `home-assistant` - Proxies prometheus metrics from a home-assistant server.
- `synology` - Exports statistics from my NAS drive.
- `minecraft` - Exports statistics for my Minecraft server.
Other tools
Here are other tools I've implemented for use in the cluster.
- `bucket-object-cleaner` - Deletes objects in a blob bucket older than a configured age.
- `grafana-backup` - Copies all dashboards and data sources from grafana and writes them to a MinIO bucket.
- `db-backup` - A backup utility for databases.
- `ftp-backup` - Copies all files from a specified path of an FTP server and writes them to a MinIO bucket.
User interfaces
This repo contains a few homemade user interfaces for navigation/overview of the applications running in the cluster.
- `directory` - A simple YAML-configured link page to access different services in the homelab.
- `health-dashboard` - A simple UI that returns the health check status of custom services using the pkg.dsb.dev flavoured health checks.
External services
These are devices/services that the cluster interacts with, without being directly installed in the cluster.
- Ring - Home security devices, connected via home-assistant.
- Tailscale VPN - Used to access the cluster from anywhere.
- Synology NAS - Used as the storage backend for minio, primarily used for volume backups.
- Philips Hue - Smart lighting, connected via home-assistant.
- Cloudflare - DNS, used to access my applications under the `*.homelab.dsb.dev` domain.
- Sentry - Cloud-based error monitoring.
Cluster upgrades
Upgrading the k3s cluster itself is managed using Rancher's system-upgrade-controller. It automates upgrading the
cluster through the use of a CRD. To perform a cluster upgrade, see the `plans` directory. Each upgrade is stored in
its own directory, named using the desired version. When the plan manifests are applied via kustomize, the controller
starts jobs that upgrade the master node, followed by the worker nodes. The upgrade only takes a few minutes; tools
like `k9s` and `kubectl` will not be able to communicate with the cluster for a short amount of time while the master
node upgrades.
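For orientation, a plan for the master node looks roughly like this (a sketch based on the upstream
system-upgrade-controller examples; the version and names are placeholders rather than a copy of a plan in this repo):

```yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  # Only target the master node; a matching agent plan covers the workers.
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/master
        operator: In
        values:
          - "true"
  serviceAccountName: system-upgrade
  cordon: true
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.20.4+k3s1
```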
Node maintenance
The `hack` directory at the root of the repository contains files used on all nodes in the cluster.
Crontab
The `crontab` file describes scheduled tasks that clear out temporary and old files on the filesystem
(`/tmp`, `/var/log`, etc.) and perform package upgrades on a weekly basis. It also prunes container images that are no
longer in use.
The crontab file can be deployed to all nodes using the `make install-cron-jobs` recipe. This command copies the
contents of the local crontab file to each node via SSH. You need to have used `ssh-copy-id` for each node so you
don't get any password prompts.
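As a rough sketch, the kinds of entries involved look like this (the schedules and commands here are hypothetical,
not a copy of the real file):

```
# Clear out old temporary files nightly (hypothetical schedule and path).
0 3 * * * find /tmp -type f -mtime +7 -delete

# Upgrade packages once a week (hypothetical; assumes a Debian-based OS).
0 4 * * 0 apt-get update && apt-get upgrade -y

# Prune container images that are no longer in use (hypothetical).
0 5 * * * k3s crictl rmi --prune
```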
K3s services
The `k3s.service` and `k3s-agent.service` files are the systemd service files that run the server and agent nodes.
They set the cluster to communicate via the Tailscale network and stop k3s from installing traefik. This is because I
run traefik 2, whereas k3s comes with 1.7 by default.
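As an illustration, the relevant fragment of a unit file might look like this (the flags shown are real k3s options,
but the exact set used in this repo's unit files is an assumption):

```ini
[Service]
# Route cluster traffic over the Tailscale interface and skip the bundled
# traefik 1.7 deployment (illustrative flags, not the actual file).
ExecStart=/usr/local/bin/k3s server \
    --flannel-iface tailscale0 \
    --disable traefik
```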
Overclocking
The `usercfg.txt` file is stored at `/boot/firmware/usercfg.txt` and is used to set overclocking values for the
Raspberry Pis. I'm pretty certain this voids my warranty, so if you're not me and planning on using this repository,
you should keep that in mind.
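For illustration, the values involved are along these lines (the numbers here are typical Raspberry Pi 4 overclock
settings, not necessarily the ones in this repo):

```ini
# Hypothetical overclock values for a Raspberry Pi 4.
over_voltage=6
arm_freq=2000
gpu_freq=750
```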
See Overclocking options in config.txt for more details on these values.
Multipath
The `multipath.conf` file is the configuration file for the multipath daemon. It is used to overwrite the built-in
configuration table of `multipathd`. Any line whose first non-white-space character is a '#' is considered a comment
line. Empty lines are ignored.
The sole reason this file exists is to handle an issue with longhorn that I was experiencing.
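For reference, the workaround Longhorn documents for that issue is to blacklist standard device nodes so that
multipathd leaves the volumes Longhorn attaches alone; assuming this file follows that guidance, the relevant stanza
looks like:

```
blacklist {
    devnode "^sd[a-z0-9]+"
}
```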
Managed infrastructure
Some aspects of the homelab are managed using Terraform. These include DNS records via Cloudflare. To plan and apply
changes, use the `Makefile` in the terraform directory. The `make plan` and `make apply` recipes will perform changes.
The terraform state is included in this repository. It is encrypted using strongbox, which is installed when using
`make install-tools`.
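As an illustration of the kind of resource being managed, a Cloudflare DNS record in Terraform looks roughly like
this (the names and values here are invented for the example):

```hcl
# Hypothetical record; the real ones live in the terraform directory.
resource "cloudflare_record" "grafana" {
  zone_id = var.cloudflare_zone_id
  name    = "grafana.homelab"
  type    = "A"
  value   = "192.168.1.100" # invented internal address
  proxied = false
}
```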
Terraform Providers
This list contains all terraform providers used in the project:
Database provisioning
New postgres databases can be provisioned using a Kubernetes `Job` resource that runs the `createdb` binary included
in standard `postgres` docker images. Below is an example:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-db-init
spec:
  template:
    spec:
      containers:
        - image: postgres:13.1-alpine
          name: createdb
          command:
            - createdb
          env:
            - name: PGHOST
              value: postgres.storage.svc.cluster.local
            - name: PGDATABASE
              value: example
            - name: PGUSER
              valueFrom:
                secretKeyRef:
                  key: postgres.user
                  name: postgres
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  key: postgres.password
                  name: postgres
      restartPolicy: Never
  backoffLimit: 0
```
You can view the documentation for the `createdb` command here.
Docker images
The cluster contains a deployment of the docker registry that is used as a pull-through proxy for any images hosted
on hub.docker.com. When referencing images stored in the main library, like `postgres` or `busybox`, you use the
image reference prefix `registry.homelab.dsb.dev/library`. Otherwise, you just use the repository/tag combination.
This increases the speed at which images are pulled and also helps with docker's recent change to add API request
limits.
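For example, a pod pulling the official postgres image through the proxy would reference it like this (a sketch; the
tag is arbitrary):

```yaml
containers:
  - name: postgres
    # Library image pulled via the in-cluster pull-through proxy.
    image: registry.homelab.dsb.dev/library/postgres:13.1-alpine
```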
Environment
- 4 Raspberry Pi 4b (8GB RAM)
- Kubernetes via k3s
- Zebra Bramble Cluster Case from C4 Labs
- 4 SanDisk Ultra 32 GB microSDHC Memory Cards
- 4 SanDisk Ultra Fit 128 GB USB 3.1 Flash Drive USB Drives
- Synology DS115j NAS drive