  • Stars: 2,138
  • Rank: 21,580 (Top 0.5 %)
  • Language: Go
  • License: Apache License 2.0
  • Created: about 6 years ago
  • Updated: 3 months ago

Repository Details

Dynamically provisioning persistent local storage with Kubernetes

Local Path Provisioner

Overview

Local Path Provisioner provides a way for Kubernetes users to utilize the local storage on each node. Based on the user configuration, the Local Path Provisioner will automatically create either a hostPath- or local-based persistent volume on the node. It builds on the features introduced by the Kubernetes Local Persistent Volume feature, but provides a simpler solution than the built-in local volume feature in Kubernetes.

Comparison to the built-in Local Persistent Volume feature in Kubernetes

Pros

Dynamic provisioning of volumes using hostPath or local.

Cons

  1. No support for volume capacity limits currently.
    1. The capacity limit will be ignored for now.

Requirement

Kubernetes v1.12+.

Deployment

Installation

In this setup, the directory /opt/local-path-provisioner will be used across all the nodes as the path for provisioning (i.e., where the persistent volume data is stored). The provisioner will be installed in the local-path-storage namespace by default.

  • Stable
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.24/deploy/local-path-storage.yaml
  • Development
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Or, use kustomize to deploy.

  • Stable
kustomize build "github.com/rancher/local-path-provisioner/deploy?ref=v0.0.24" | kubectl apply -f -
  • Development
kustomize build "github.com/rancher/local-path-provisioner/deploy?ref=master" | kubectl apply -f -

After installation, you should see something like the following:

$ kubectl -n local-path-storage get pod
NAME                                     READY     STATUS    RESTARTS   AGE
local-path-provisioner-d744ccf98-xfcbk   1/1       Running   0          7m

Check and follow the provisioner log using:

kubectl -n local-path-storage logs -f -l app=local-path-provisioner

Usage

Create a hostPath-backed Persistent Volume and a pod that uses it:

kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc/pvc.yaml
kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml

Or, use kustomize to deploy them.

kustomize build "github.com/rancher/local-path-provisioner/examples/pod?ref=master" | kubectl apply -f -
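
For reference, the example manifests amount to a PVC that requests the local-path storage class plus a pod that mounts it. A minimal sketch is shown below; the container image and exact field values are illustrative, not the verbatim upstream files:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine   # illustrative; any long-running image works
    volumeMounts:
    - name: volv
      mountPath: /data
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: local-path-pvc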

You should see the PV has been created:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                    STORAGECLASS   REASON    AGE
pvc-bc3117d9-c6d3-11e8-b36d-7a42907dda78   2Gi        RWO            Delete           Bound     default/local-path-pvc   local-path               4s

The PVC has been bound:

$ kubectl get pvc
NAME             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
local-path-pvc   Bound     pvc-bc3117d9-c6d3-11e8-b36d-7a42907dda78   2Gi        RWO            local-path     16s

And the Pod started running:

$ kubectl get pod
NAME          READY     STATUS    RESTARTS   AGE
volume-test   1/1       Running   0          3s

Write something into the pod:

kubectl exec volume-test -- sh -c "echo local-path-test > /data/test"

Now delete the pod using:

kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml

After confirming that the pod is gone, recreate the pod using:

kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml

Check the volume content:

$ kubectl exec volume-test -- sh -c "cat /data/test"
local-path-test

Delete the pod and the PVC:

kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml
kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc/pvc.yaml

Or, use kustomize to delete them.

kustomize build "github.com/rancher/local-path-provisioner/examples/pod?ref=master" | kubectl delete -f -

The volume content stored on the node will be automatically cleaned up. You can check the log of local-path-provisioner-xxx for details.

Now you've verified that the provisioner works as expected.

Configuration

Customize the ConfigMap

The provisioner's configuration consists of a JSON file config.json, a Pod template helperPod.yaml and two shell scripts setup and teardown, stored in a ConfigMap, e.g.:

kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
        {
                "nodePathMap":[
                {
                        "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                        "paths":["/opt/local-path-provisioner"]
                },
                {
                        "node":"yasker-lp-dev1",
                        "paths":["/opt/local-path-provisioner", "/data1"]
                },
                {
                        "node":"yasker-lp-dev3",
                        "paths":[]
                }
                ]
        }
  setup: |-
        #!/bin/sh
        set -eu
        mkdir -m 0777 -p "$VOL_DIR"
  teardown: |-
        #!/bin/sh
        set -eu
        rm -rf "$VOL_DIR"
  helperPod.yaml: |-
        apiVersion: v1
        kind: Pod
        metadata:
          name: helper-pod
        spec:
          containers:
          - name: helper-pod
            image: busybox

config.json

Definition

nodePathMap lets users customize where data is stored on each node.

  1. If a node is not listed in the nodePathMap, and Kubernetes wants to create a volume on it, the paths specified under DEFAULT_PATH_FOR_NON_LISTED_NODES will be used for provisioning.
  2. If a node is listed in the nodePathMap, the paths specified in paths will be used for provisioning.
    1. If a node is listed but paths is set to [], the provisioner will refuse to provision on that node.
    2. If more than one path is specified, a path is chosen randomly when provisioning.

sharedFileSystemPath allows the provisioner to use a filesystem that is mounted on all nodes at the same time. In this case, all access modes are supported for storage claims: ReadWriteOnce, ReadOnlyMany and ReadWriteMany.

In addition, volumeBindingMode: Immediate can be used in the StorageClass definition.

Please note that nodePathMap and sharedFileSystemPath are mutually exclusive. If sharedFileSystemPath is used, then nodePathMap must be set to [].
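
For example, a config.json using a shared filesystem might look like the following. The mount point /mnt/nfs-share is an assumption for illustration; it must already be mounted on every node:

{
        "nodePathMap":[],
        "sharedFileSystemPath":"/mnt/nfs-share"
}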

Rules

The configuration must obey the following rules:

  1. config.json must be a valid JSON file.
  2. A path must start with /, i.e., it must be an absolute path.
  3. The root directory (/) is prohibited.
  4. No duplicate paths are allowed for one node.
  5. No duplicate nodes are allowed.

Scripts setup and teardown and the helperPod.yaml template

  • The setup script is run before the volume is created, to prepare the volume directory on the node.
  • The teardown script is run after the volume is deleted, to clean up the volume directory on the node.
  • The helperPod.yaml template is used to create a helper Pod that runs the setup or teardown script.

The scripts receive their input as environment variables:

Environment variable   Description
VOL_DIR                Volume directory that should be created or removed.
VOL_MODE               The PersistentVolume mode (Block or Filesystem).
VOL_SIZE_BYTES         Requested volume size in bytes.
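
For illustration, a customized setup script in the ConfigMap could read these variables as in the sketch below (this is not the stock script):

  setup: |-
        #!/bin/sh
        set -eu
        # VOL_DIR, VOL_MODE and VOL_SIZE_BYTES are injected by the provisioner.
        echo "setting up $VOL_DIR (mode: $VOL_MODE, requested: $VOL_SIZE_BYTES bytes)"
        mkdir -m 0777 -p "$VOL_DIR"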

Reloading

The provisioner supports automatic configuration reloading. Users can change the configuration using kubectl apply or kubectl edit on the config map local-path-config. There is a delay between when the user updates the config map and when the provisioner picks it up.
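
For example, to edit the config map in place:

kubectl -n local-path-storage edit configmap local-path-config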

When the provisioner detects a configuration change, it will try to load the new configuration. You can observe this in the log:

time="2018-10-03T05:56:13Z" level=debug msg="Applied config: {"nodePathMap":[{"node":"DEFAULT_PATH_FOR_NON_LISTED_NODES","paths":["/opt/local-path-provisioner"]},{"node":"yasker-lp-dev1","paths":["/opt","/data1"]},{"node":"yasker-lp-dev3"}]}"

If the reload fails, the provisioner will log the error and continue using the last valid configuration for provisioning in the meantime.

time="2018-10-03T05:19:25Z" level=error msg="failed to load the new config file: fail to load config file /etc/config/config.json: invalid character '#' looking for beginning of object key string"

time="2018-10-03T05:20:10Z" level=error msg="failed to load the new config file: config canonicalization failed: path must start with / for path opt on node yasker-lp-dev1"

time="2018-10-03T05:23:35Z" level=error msg="failed to load the new config file: config canonicalization failed: duplicate path /data1 on node yasker-lp-dev1"

time="2018-10-03T06:39:28Z" level=error msg="failed to load the new config file: config canonicalization failed: duplicate node yasker-lp-dev3"

Volume Types

To specify the type of volume you want the provisioner to create, add either of the following annotations:

  • PVC:
annotations:
  volumeType: <local or hostPath>
  • StorageClass:
annotations:
  defaultVolumeType: <local or hostPath>

A few things to note: the annotation on the StorageClass applies to all volumes using it, and is superseded by the annotation on the PVC if one is provided. If neither annotation is provided, the default is hostPath.
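
For example, a PVC that explicitly asks for a local volume could look like the sketch below (the claim name and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-volume-pvc
  annotations:
    volumeType: local
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi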

Storage classes

If more than one path is specified in the nodePathMap, the path is chosen randomly. To make the provisioner choose a specific path, use a StorageClass defined with a parameter called nodePath. Note that this path should be defined in the nodePathMap.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-local-path
provisioner: cluster.local/local-path-provisioner
parameters:
  nodePath: /data/ssd
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

Here the provisioner will use the path /data/ssd when the storage class ssd-local-path is used.
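
For this to work, /data/ssd has to appear in the nodePathMap, for example (a sketch, using the default node entry):

{
        "nodePathMap":[
        {
                "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                "paths":["/opt/local-path-provisioner", "/data/ssd"]
        }
        ]
}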

Uninstall

Before uninstalling, make sure the PVs created by the provisioner have already been deleted. Use kubectl get pv and make sure no PV with the local-path StorageClass remains.
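
One quick way to check (the command should print no rows):

kubectl get pv | grep local-path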

To uninstall, execute:

  • Stable
kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.24/deploy/local-path-storage.yaml
  • Development
kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Debug

It provides an out-of-cluster debug environment for developers.

debug

git clone https://github.com/rancher/local-path-provisioner.git
cd local-path-provisioner
go build
kubectl apply -f debug/config.yaml
./local-path-provisioner --debug start --service-account-name=default

example

See the Usage section above.

clear

kubectl delete -f debug/config.yaml

License

Copyright (c) 2014-2020 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
