
ceph-deploy -- Deploy Ceph with minimal infrastructure

ceph-deploy is a way to deploy Ceph relying on just SSH access to the servers, sudo, and some Python. It runs fully on your workstation, requiring no servers, databases, or anything like that.

If you set up and tear down Ceph clusters a lot, and want minimal extra bureaucracy, this is for you.

This README provides a brief overview of ceph-deploy; for thorough documentation, please go to https://docs.ceph.com/projects/ceph-deploy/en/latest/

What this tool is not

It is not a generic deployment system. It is only for Ceph, and it is designed for users who want to quickly get Ceph running with sensible initial settings without the overhead of installing Chef, Puppet, or Juju.

It does not handle client configuration beyond pushing the Ceph config file. Users who want fine-grained control over security settings, partitions, or directory locations should use a tool such as Chef or Puppet.

Installation

Depending on how you plan to use ceph-deploy, you might want to look into the different ways to install it. For automation, you might want to bootstrap directly. Regular users of ceph-deploy would probably install from the OS packages or from the Python Package Index.

Python Package Index

If you are familiar with Python install tools (like pip and easy_install), you can easily install ceph-deploy like this:

pip install ceph-deploy

or:

easy_install ceph-deploy

It should grab all the dependencies for you and install into the current user's environment.

We highly recommend using virtualenv and installing dependencies in a contained way.
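For example, a contained install might look like the following sketch (assumes python3 with the venv module is available; the environment name is arbitrary):

```shell
# Create an isolated environment for ceph-deploy (name is arbitrary):
python3 -m venv ceph-deploy-env
# Then install into it (requires network access to PyPI):
#   ./ceph-deploy-env/bin/pip install ceph-deploy
```

This keeps ceph-deploy and its dependencies out of the system-wide Python environment.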

DEB

All new releases of ceph-deploy are pushed to all ceph DEB release repos.

The DEB release repos are found at:

http://ceph.com/debian-{release}
http://ceph.com/debian-testing

This means, for example, that installing ceph-deploy from http://ceph.com/debian-giant will install the same version as from http://ceph.com/debian-firefly or http://ceph.com/debian-testing.
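As an illustration, an APT source entry for one of these repos might look like the following (giant and trusty are placeholders for your chosen release and your distro codename):

```
deb http://ceph.com/debian-giant/ trusty main
```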

RPM

All new releases of ceph-deploy are pushed to all ceph RPM release repos.

The RPM release repos are found at:

http://ceph.com/rpm-{release}
http://ceph.com/rpm-testing

Make sure you add the proper one for your distribution (e.g., el7 vs. rhel7).

This means, for example, that installing ceph-deploy from http://ceph.com/rpm-giant will install the same version as from http://ceph.com/rpm-firefly or http://ceph.com/rpm-testing.
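As an illustration, a yum repo file for one of these repos might look like the following sketch (giant and el7 are placeholders for your chosen release and distribution; gpgcheck is disabled here for brevity, but in practice you would configure the release key):

```
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-giant/el7/noarch
enabled=1
gpgcheck=0
```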

bootstrapping

To get the source tree ready for use, run this once:

./bootstrap

You can symlink the ceph-deploy script from this directory to somewhere convenient (like ~/bin), add the current directory to PATH, or just always type the full path to ceph-deploy.

SSH and Remote Connections

ceph-deploy will attempt to connect via SSH to hosts when the hostnames do not match the current host's hostname. For example, if you are connecting to host node1 it will attempt an SSH connection as long as the current host's hostname is not node1.
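That decision can be sketched roughly like this (a simplified illustration, not ceph-deploy's actual code; node1 is a hypothetical target host):

```shell
# Sketch: only open an SSH connection when the target name differs
# from the local short hostname.
target=node1
if [ "$(hostname -s)" != "$target" ]; then
    echo "connecting to $target over SSH"
else
    echo "running commands locally"
fi
```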

ceph-deploy at a minimum requires that the machine from which the script is being run can ssh as root without password into each Ceph node.

To enable this, generate a new SSH keypair for the root user with no passphrase and place the public key (id_rsa.pub or id_dsa.pub) in:

/root/.ssh/authorized_keys

and ensure that the following lines are in the sshd config:

PermitRootLogin without-password
PubkeyAuthentication yes
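Setting up the keypair might look like the following sketch (the key path here is a local example; the real key would normally live under /root/.ssh/, and node1 is a hypothetical host):

```shell
# Generate a passphrase-less keypair (example path; normally /root/.ssh/id_rsa):
ssh-keygen -t rsa -N "" -f ./ceph-deploy-key -q
# Then append the public key to /root/.ssh/authorized_keys on each node, e.g.:
#   ssh-copy-id -i ./ceph-deploy-key.pub root@node1
```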

The machine running ceph-deploy does not need to have the Ceph packages installed unless it needs to administer the cluster directly using the ceph command-line tool.

usernames

When a username is not specified, the connection will be made with the same username as the one executing ceph-deploy. This is useful if the same username is shared across all the nodes, but can be cumbersome if that is not the case.

One way to avoid this is to define the correct usernames to connect with in the SSH config, but you can also use the --username flag:

ceph-deploy --username ceph install node1

ceph-deploy would then use ceph@node1 to connect to that host.

The same applies to any action that requires a connection to a remote host.
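For example, the SSH-config route might look like this in ~/.ssh/config (node names here are hypothetical):

```
Host node1 node2 node3
    User ceph
```

With this in place, plain ceph-deploy invocations against those hosts would connect as the ceph user without needing --username.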

Managing an existing cluster

You can use ceph-deploy to provision nodes for an existing cluster. To grab a copy of the cluster configuration file (normally ceph.conf):

ceph-deploy config pull HOST

You will usually also want to gather the encryption keys used for that cluster:

ceph-deploy gatherkeys MONHOST

At this point you can skip the steps below that create a new cluster (you already have one) and optionally skip installation and/or monitor creation, depending on what you are trying to accomplish.

Creating a new cluster

Creating a new configuration

To create a new configuration file and secret key, decide what hosts will run ceph-mon, and run:

ceph-deploy new MON [MON..]

listing the hostnames of the monitors. Each MON can be

  • a simple hostname. It must be DNS resolvable without the fully qualified domain name.
  • a fully qualified domain name. The hostname is assumed to be the leading component, up to the first dot (.).
  • a HOST:FQDN pair, of both the hostname and a fully qualified domain name or IP address. For example, foo, foo.example.com, foo:something.example.com, and foo:1.2.3.4 are all valid. Note, however, that the hostname should match that configured on the host foo.

The above will create a ceph.conf and ceph.mon.keyring in your current directory.

Edit initial cluster configuration

You will want to review the generated ceph.conf file and make sure that the mon_host setting contains the IP addresses you would like the monitors to bind to. These are the IPs that clients will initially contact to authenticate to the cluster, and they need to be reachable both by external client-facing hosts and by internal cluster daemons.
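As an illustration, the relevant entries might look something like this (hostnames and addresses are placeholders):

```
[global]
# fsid is generated by `ceph-deploy new`
mon_initial_members = node1, node2, node3
mon_host = 192.168.0.11,192.168.0.12,192.168.0.13
```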

Installing packages

To install the Ceph software on the servers, run:

ceph-deploy install HOST [HOST..]

This installs the current default stable release. You can choose a different release track with command line options, for example to use a release candidate:

ceph-deploy install --testing HOST

Or to test a development branch:

ceph-deploy install --dev=wip-mds-now-works-no-kidding HOST [HOST..]

Proxy or Firewall Installs

If attempting to install behind a firewall or through a proxy, you can use the --no-adjust-repos flag, which tells ceph-deploy to skip any changes to the distro's repositories and go straight to package installation.

This allows an environment without internet access to point to its own repositories. Those repositories will need to be properly set up (and mirrored with all the necessary dependencies) before attempting an install.

Another alternative is to set the wget environment variables to point to the right hosts. For example, put the following lines into /root/.wgetrc on each node (since ceph-deploy runs wget as root):

http_proxy=http://host:port
ftp_proxy=http://host:port
https_proxy=http://host:port

Deploying monitors

To actually deploy ceph-mon to the hosts you chose, run:

ceph-deploy mon create HOST [HOST..]

Without explicit hosts listed, hosts in mon_initial_members in the config file are deployed. That is, the hosts you passed to ceph-deploy new are the default value here.

Gather keys

To gather authentication keys (for administering the cluster and bootstrapping new nodes) into the local directory, run:

ceph-deploy gatherkeys HOST [HOST...]

where HOST is one of the monitor hosts.

Once these keys are in the local directory, you can provision new OSDs etc.

Deploying OSDs

To prepare a node for running OSDs, run:

ceph-deploy osd create HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]

After that, the hosts will be running OSDs for the given data disks. If you specify a raw disk (e.g., /dev/sdb), partitions will be created and GPT labels will be used to mark and automatically activate OSD volumes. If an existing partition is specified, the partition table will not be modified. If you want to destroy the existing partition table on DISK first, you can include the --zap-disk option.
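For example, to use /dev/sdb for data with a journal on /dev/sdc on a hypothetical node1, wiping the existing partition table first, the invocation would look like:

```
ceph-deploy osd create node1:/dev/sdb:/dev/sdc --zap-disk
```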

If there is already a prepared disk or directory that is ready to become an OSD, you can also do:

ceph-deploy osd activate HOST:DIR[:JOURNAL] [...]

This is useful when you are managing the mounting of volumes yourself.

Admin hosts

To prepare a host with a ceph.conf and a ceph.client.admin.keyring so that it can administer the cluster, run:

ceph-deploy admin HOST [HOST ...]

Forget keys

The new and gatherkeys commands put some Ceph authentication keys in keyrings in the local directory. If you are worried about them being there for security reasons, run:

ceph-deploy forgetkeys

and they will be removed. If you need them again later to deploy additional nodes, simply re-run:

ceph-deploy gatherkeys HOST [HOST...]

and they will be retrieved from an existing monitor node.

Multiple clusters

All of the above commands take a --cluster=NAME option, allowing you to manage multiple clusters conveniently from one workstation. For example:

ceph-deploy --cluster=us-west new
vi us-west.conf
ceph-deploy --cluster=us-west mon create

FAQ

Before anything

Make sure you have the latest version of ceph-deploy. It is actively developed, and releases come out weekly (on average). Recent versions of ceph-deploy have a --version flag you can use; otherwise, check with your package manager and update if there is anything new.

Why is feature X not implemented?

Usually, features are added when they make sense for someone who wants to get started with Ceph. If you believe this is the case, have read "What this tool is not", and still think feature X should exist in ceph-deploy, open a feature request in the Ceph tracker: http://tracker.ceph.com/projects/ceph-deploy/issues

A command gave me an error, what is going on?

Most ceph-deploy commands are meant to run remotely on a host that you configured when creating the initial config. If a given command is not working as expected, try running the failing command directly on the remote host and verify the behavior there.

If the behavior on the remote host is the same, then it is probably not something wrong with ceph-deploy per se. Make sure you capture both the ceph-deploy output and the output of the command on the remote host.

Issues with monitors

If your monitors are not starting, make sure that the {hostname} you used when you ran ceph-deploy mon create {hostname} matches the output of hostname -s on the remote host.
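You can check this on the node itself (node1 being the name passed to ceph-deploy in this hypothetical example):

```shell
# Print the short hostname; it should match the name given to
# `ceph-deploy mon create` (e.g. node1).
hostname -s
```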

Newer versions of ceph-deploy should warn you if they differ; a mismatch can prevent the monitors from reaching quorum.

Developing ceph-deploy

Now that you have cut your teeth on Ceph, you might find that you want to contribute to ceph-deploy.

Resources

Bug tracking: http://tracker.ceph.com/projects/ceph-deploy/issues

Mailing list and IRC info is the same as for Ceph: http://ceph.com/resources/mailing-list-irc/

Submitting Patches

Please add test cases to cover any code you add. You can test your changes by running tox from inside the git clone (you will also need mock and pytest).

When creating a commit message please use git commit -s or otherwise add Signed-off-by: Your Name <[email protected]> to your commit message.

Patches can then be submitted by a pull request on GitHub.
