• This repository has been archived on 18/Jun/2022
• Stars: 1,308
• Rank: 34,521 (Top 0.8%)
• Language: Go
• License: Apache License 2.0
• Created: over 9 years ago
• Updated: over 2 years ago


Repository Details

A Docker volume plugin, managing persistent container volumes.

Convoy

Overview

Convoy is a Docker volume plugin for a variety of storage back-ends. It supports vendor-specific extensions like snapshots, backups, and restores. It's written in Go and can be deployed as a standalone binary.


Why use Convoy?

Convoy makes it easy to manage your data in Docker. It provides persistent volumes for Docker containers with support for snapshots, backups, and restores on various back-ends (e.g. device mapper, NFS, EBS).

For example, you can:

  • Migrate volumes between hosts
  • Share the same volumes across hosts
  • Schedule periodic snapshots of volumes
  • Recover a volume from a previous backup

Supported back-ends

  • Device Mapper
  • Virtual File System (VFS) / Network File System (NFS)
  • Amazon Elastic Block Store (EBS)

Quick Start Guide

First, make sure Docker 1.8 or above is running.

docker --version

If not, install the latest Docker daemon as follows:

curl -sSL https://get.docker.com/ | sh
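The 1.8 minimum can also be checked in a script. A minimal sketch, assuming a hard-coded sample version string for illustration (in practice you would capture the output of docker --version):

```shell
# Parse a `docker --version`-style string and check it meets the 1.8 minimum.
ver="Docker version 1.12.6, build 78d1802"
num=$(echo "$ver" | sed 's/Docker version \([0-9]*\.[0-9]*\).*/\1/')
major=${num%%.*}
minor=${num##*.}
if [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 8 ]; }; then
  echo "Docker $num is new enough"
else
  echo "Docker $num is too old; 1.8 or above is required"
fi
```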

Once the right Docker daemon version is running, install and configure the Convoy volume plugin as follows:

wget https://github.com/rancher/convoy/releases/download/v0.5.2/convoy.tar.gz
tar xvzf convoy.tar.gz
sudo cp convoy/convoy convoy/convoy-pdata_tools /usr/local/bin/
sudo mkdir -p /etc/docker/plugins/
sudo bash -c 'echo "unix:///var/run/convoy/convoy.sock" > /etc/docker/plugins/convoy.spec'
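Docker's legacy plugin discovery simply reads the daemon's socket address from that .spec file. A small sketch of the expected file content, written to a temporary path here so it runs without root (the real file lives at /etc/docker/plugins/convoy.spec):

```shell
# A plugin spec file is a single line naming the socket Docker should use
# to reach the Convoy daemon.
spec=$(mktemp)
echo "unix:///var/run/convoy/convoy.sock" > "$spec"
cat "$spec"
rm -f "$spec"
```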

You can use a file-backed loopback device to test and demo the Convoy Device Mapper driver. A loopback device, however, is known to be unstable and should not be used in production.

truncate -s 100G data.vol
truncate -s 1G metadata.vol
sudo losetup /dev/loop5 data.vol
sudo losetup /dev/loop6 metadata.vol

Once the data and metadata devices are set up, you can start the Convoy plugin daemon as follows:

sudo convoy daemon --drivers devicemapper --driver-opts dm.datadev=/dev/loop5 --driver-opts dm.metadatadev=/dev/loop6

You can now create a Docker container with a Convoy volume. As a test, create a file called /vol1/foo in the Convoy volume:

sudo docker run -v vol1:/vol1 --volume-driver=convoy ubuntu touch /vol1/foo

Next, take a snapshot of the Convoy volume and back up the snapshot to a local directory. (You can also make backups to an NFS share or an S3 object store.)

sudo convoy snapshot create vol1 --name snap1vol1
sudo mkdir -p /opt/convoy/
sudo convoy backup create snap1vol1 --dest vfs:///opt/convoy/

The convoy backup command returns a URL string representing the backup dataset. You can use this URL to recover the volume on another host:

sudo convoy create res1 --backup <backup_url>

The following command creates a new container and mounts the recovered Convoy volume into that container:

sudo docker run -v res1:/res1 --volume-driver=convoy ubuntu ls /res1/foo

You should see the recovered file in /res1/foo.

Installation

Ensure you have Docker 1.8 or above installed.

Download the latest version of Convoy and unpack it. Put the binaries in a directory on the $PATH of the root and sudo users (e.g. /usr/local/bin).

wget https://github.com/rancher/convoy/releases/download/v0.5.2/convoy.tar.gz
tar xvzf convoy.tar.gz
sudo cp convoy/convoy convoy/convoy-pdata_tools /usr/local/bin/

Run the following commands to set up the Convoy volume plugin for Docker:

sudo mkdir -p /etc/docker/plugins/
sudo bash -c 'echo "unix:///var/run/convoy/convoy.sock" > /etc/docker/plugins/convoy.spec'

Start Convoy Daemon

You need to pass different arguments to the Convoy daemon depending on your choice of back-end implementation.

Device Mapper

If you're running in a production environment with the Device Mapper driver, it's recommended to attach a new, empty block device to the host Convoy is running on. Then you can make two partitions on the device using dm_dev_partition.sh to get two block devices ready for the Device Mapper driver. See Device Mapper Partition Helper for more details.

Device Mapper requires two block devices to create a storage pool for all volumes and snapshots. Assuming you have created two devices, a data device called /dev/convoy-vg/data and a metadata device called /dev/convoy-vg/metadata, run the following command to start the Convoy daemon:

sudo convoy daemon --drivers devicemapper --driver-opts dm.datadev=/dev/convoy-vg/data --driver-opts dm.metadatadev=/dev/convoy-vg/metadata
  • The default Device Mapper volume size is 100G. You can override it with the --driver-opts dm.defaultvolumesize option.
  • You can take a look here if you want to know how much storage should be allocated for the metadata device.
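For a rough sense of scale, the device-mapper thin-provisioning documentation suggests about 48 bytes of metadata per data chunk. The sketch below estimates a metadata size for a 100 GiB pool with an assumed 512 KiB chunk size; both figures are illustrative assumptions, not Convoy defaults:

```shell
# Rough thin-pool metadata estimate: ~48 bytes per chunk.
pool_bytes=$((100 * 1024 * 1024 * 1024))   # 100 GiB data device
chunk_bytes=$((512 * 1024))                # assumed 512 KiB chunk size
chunks=$((pool_bytes / chunk_bytes))
meta_mib=$((48 * chunks / 1024 / 1024))
echo "~${meta_mib} MiB of metadata for ${chunks} chunks"
```

In practice you would round up generously; the 1G metadata.vol used in the Quick Start leaves ample headroom.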

NFS

First, mount the NFS share to the root directory used to store volumes. Substitute <vfs_path> with the appropriate directory of your choice:

sudo mkdir <vfs_path>
sudo mount -t nfs <nfs_server>:/path <vfs_path>
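To make the NFS mount survive reboots, a matching /etc/fstab entry could look like the following sketch; substitute your server and path, and check that the mount options fit your environment:

```
# /etc/fstab
<nfs_server>:/path  <vfs_path>  nfs  defaults  0 0
```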

The NFS-based Convoy daemon can be started as follows:

sudo convoy daemon --drivers vfs --driver-opts vfs.path=<vfs_path>

EBS

Make sure you're running on an EC2 instance and have already configured AWS credentials correctly.

sudo convoy daemon --drivers ebs

DigitalOcean

Make sure you're running on a DigitalOcean Droplet and that you have the DO_TOKEN environment variable set with your key.

sudo convoy daemon --drivers digitalocean

Volume Commands

Create a Volume

Volumes can be created using the convoy create command:

sudo convoy create volume_name
  • Device Mapper: Default volume size is 100G. --size option is supported.
  • EBS: Default volume size is 4G. --size and some other options are supported.

You can also create a volume using the docker run command. If the volume does not yet exist, a new volume will be created. Otherwise the existing volume will be used.

sudo docker run -it -v test_volume:/test --volume-driver=convoy ubuntu

Delete a Volume

sudo convoy delete <volume_name>

or

sudo docker rm -v <container_name>
  • NFS, EBS and DigitalOcean: The -r/--reference option instructs the convoy delete command to delete only the reference to the volume from the current host, leaving the underlying files on the NFS server or the EBS volume unchanged. This is useful when the volume needs to be reused later.
  • docker rm -v is treated as convoy delete with -r/--reference.
  • If you use --rm with docker run, all Docker volumes associated with the container are deleted on container exit via convoy delete --reference. See the Docker run reference for details.

List and Inspect a Volume

sudo convoy list
sudo convoy inspect vol1

Take Snapshot of a Volume

sudo convoy snapshot create vol1 --name snap1vol1

Delete a Snapshot

sudo convoy snapshot delete snap1vol1
  • Device Mapper: please make sure you keep the latest backed-up snapshot for the same volume available to enable the incremental backup mechanism. Convoy needs it to calculate the differences between snapshots.

Backup a Snapshot

  • Device Mapper or VFS: You can back up a snapshot to an NFS mount/local directory or an S3 object store:
sudo convoy backup create snap1vol1 --dest vfs:///opt/backup/

or

sudo convoy backup create snap1vol1 --dest s3://backup-bucket@us-west-2/

or, if you want to use a custom S3 endpoint (such as Minio):

sudo convoy backup --s3-endpoint http://s3.example.com:9000/ create snap1vol1 --dest s3://backup-bucket@us-west-2/

The backup operation returns a URL string that uniquely identifies the backup dataset.

s3://backup-bucket@us-west-2/?backup=f98f9ea1-dd6e-4490-8212-6d50df1982ea&volume=e0d386c5-6a24-446c-8111-1077d10356b0

If you're using S3, please make sure you have AWS credentials ready either in ~/.aws/credentials or as environment variables, as described here. You may need to put credentials in /root/.aws/credentials or set up sudo environment variables in order to get S3 credentials to work.

  • EBS: --dest is not needed. Just do convoy backup create snap1vol1.
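Since the backup URL is a plain query string, the backup and volume IDs can be pulled out with standard tools. A sketch using the example URL from above (the parsing approach here is an illustration, not part of Convoy):

```shell
# Extract the backup and volume IDs from a Convoy backup URL.
url='s3://backup-bucket@us-west-2/?backup=f98f9ea1-dd6e-4490-8212-6d50df1982ea&volume=e0d386c5-6a24-446c-8111-1077d10356b0'
backup_id=$(echo "$url" | sed 's/.*backup=\([^&]*\).*/\1/')
volume_id=$(echo "$url" | sed 's/.*volume=\([^&]*\).*/\1/')
echo "backup: $backup_id"
echo "volume: $volume_id"
```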

Restore a Volume from Backup

sudo convoy create res1 --backup <url>
  • EBS: The current host must be in the same region as the backup to be restored.

Mount a Restored Volume into a Docker Container

You can use the standard docker run command to mount the restored volume into a Docker container:

sudo docker run -it -v res1:/res1 --volume-driver convoy ubuntu

Mount an NFS-Backed Volume on Multiple Servers

An NFS-backed volume can be mounted on multiple servers at once. Use the standard docker run command to mount an existing NFS-backed volume into a Docker container. For example, if you have already created an NFS-based volume vol1 on one host, you can run the following command to mount the existing vol1 volume into a new container on another host:

sudo docker run -it -v vol1:/vol1 --volume-driver=convoy ubuntu

Support and Discussion

If you need any help with Convoy, please join us at either our forum or the #rancher IRC channel.

Feel free to submit bugs, issues, and feature requests to Convoy Issues.

Contribution

Contributions are welcome! Please take a look at the Development Guide if you want to learn how to build Convoy from source or run the test cases.

We'd love to hear your ideas for new Convoy drivers, and implementations are most welcome! Please take a look at the enhancement ideas if you want to contribute.

And of course, bug fixes are always welcome!

References

Convoy Command Line Reference

Using Convoy with Docker

Driver Specific

Device Mapper

Amazon Elastic Block Store

Virtual File System/Network File System
