Stars: 13,799 · Rank: 2,226 (top 0.05%) · Language: C++ · License: Other · Created over 13 years ago · Updated 4 months ago


Repository Details

Ceph is a distributed object, block, and file storage platform

Ceph - a scalable distributed storage system

See https://ceph.com/ for current information about Ceph.

Contributing Code

Most of Ceph is dual-licensed under the LGPL version 2.1 or 3.0. Some miscellaneous code is either public domain or licensed under a BSD-style license.

The Ceph documentation is licensed under Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0).

Some headers included in the ceph/ceph repository are licensed under the GPL. See the file COPYING for a full inventory of licenses by file.

All code contributions must include a valid "Signed-off-by" line. See the file SubmittingPatches.rst for details on this and instructions on how to generate and submit patches.
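Git can append the required trailer automatically: git commit -s adds a "Signed-off-by: Name <email>" line taken from your git configuration. A minimal, self-contained demonstration in a throwaway repository (the name and email below are hypothetical placeholders):

```shell
# Demonstrate "git commit -s" in a throwaway repository.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.name "Jane Developer"          # hypothetical identity
git config user.email "jane@example.com"
echo demo > file.txt
git add file.txt
git commit -q -s -m "doc: example commit"      # -s appends the trailer
git log -1 --format=%B | grep "Signed-off-by:"
```

The final command shows the trailer that -s appended, in the exact form reviewers expect to see in a contribution.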

Assignment of copyright is not required to contribute code. Code is contributed under the terms of the applicable license.

Checking out the source

Clone the ceph/ceph repository from GitHub by running the following command on a system that has git installed:

git clone [email protected]:ceph/ceph

Alternatively, if you are not a GitHub user, run the following command on a system that has git installed:

git clone https://github.com/ceph/ceph.git

After the ceph/ceph repository has been cloned to your system, run the following commands to enter the cloned repository and check out its git submodules:

cd ceph
git submodule update --init --recursive --progress
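As a quick sanity check, git submodule status marks any still-uninitialized submodule with a leading "-". The sketch below demonstrates the check in a stand-in repository (a fresh, empty repo, so the count is trivially 0); inside the cloned ceph directory, the same two last lines should also report 0 after a successful submodule update:

```shell
# Stand-in repository for this sketch; in practice, run the check
# from inside the cloned ceph directory instead.
tmp=$(mktemp -d); cd "$tmp"; git init -q .
# Count submodules that are still uninitialized (lines starting with "-").
uninitialized=$(git submodule status --recursive | grep -c '^-' || true)
echo "$uninitialized uninitialized submodules"
```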

Build Prerequisites

section last updated 27 Jul 2023

Make sure that curl is installed. The Debian and Ubuntu apt command is provided here; if your system uses a different package manager, substitute its equivalent command:

apt install curl

Install Debian or RPM package dependencies by running the following command:

./install-deps.sh

Install the python3-routes package:

apt install python3-routes

Building Ceph

These instructions are meant for developers who are compiling the code for development and testing. To build binaries that are suitable for installation we recommend that you build .deb or .rpm packages, or refer to ceph.spec.in or debian/rules to see which configuration options are specified for production builds.

To build Ceph, make sure that you are in the top-level ceph directory that contains do_cmake.sh and CONTRIBUTING.rst and run the following commands:

./do_cmake.sh
cd build
ninja

do_cmake.sh by default creates a "debug build" of Ceph, which can be up to five times slower than a non-debug build. Pass -DCMAKE_BUILD_TYPE=RelWithDebInfo to do_cmake.sh to create a non-debug build.

Ninja is the build system used by the Ceph project for test builds. If unspecified, the number of jobs ninja runs in parallel is derived from the number of CPU cores on the build host. On average, each parallel ninja job needs approximately 2.5 GiB of RAM, so use the -j option to limit the number of jobs if the build runs out of memory. If ninja fails with a message like "g++: fatal error: Killed signal terminated program cc1plus", you have run out of memory; rerun ninja with a -j value appropriate to your hardware. For example, to limit the number of jobs to 3, run ninja -j 3.
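The RAM-based sizing above can be sketched as a small shell snippet. This is a Linux-only sketch, since it reads MemAvailable from /proc/meminfo; the 2.5 GiB-per-job budget is the figure from the paragraph above:

```shell
# Pick a ninja job count from available RAM, budgeting ~2.5 GiB per job.
# Assumes Linux: MemAvailable is reported in KiB in /proc/meminfo.
avail_kib=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
jobs=$(( avail_kib / (2560 * 1024) ))   # 2.5 GiB = 2560 MiB = 2560*1024 KiB
if [ "$jobs" -lt 1 ]; then jobs=1; fi   # always run at least one job
echo "suggested: ninja -j $jobs"
```

On a machine with, say, 16 GiB available this suggests -j 6, which keeps the peak build memory under the available total.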

This documentation assumes that your build directory is a subdirectory of the ceph.git checkout. If the build directory is located elsewhere, point CEPH_GIT_DIR to the correct path of the checkout. Additional CMake args can be specified by setting ARGS before invoking do_cmake.sh. See cmake options for more details. For example:

ARGS="-DCMAKE_C_COMPILER=gcc-7" ./do_cmake.sh

To build only certain targets, run a command of the following form:

ninja [target name]

To install:

ninja install

CMake Options

The -D flag can be used with cmake to speed up the process of building Ceph and to customize the build.

Building without RADOS Gateway

The RADOS Gateway is built by default. To build Ceph without the RADOS Gateway, run a command of the following form:

cmake -DWITH_RADOSGW=OFF [path to top-level ceph directory]

Building with debugging and arbitrary dependency locations

Run a command of the following form to build Ceph with debugging and alternate locations for some external dependencies:

cmake -DCMAKE_INSTALL_PREFIX=/opt/ceph -DCMAKE_C_FLAGS="-Og -g3 -gdwarf-4" \
..

Ceph has several bundled dependencies such as Boost, RocksDB and Arrow. By default, cmake builds these bundled dependencies from source instead of using libraries that are already installed on the system. You can opt to use these system libraries, as long as they meet Ceph's version requirements. To use system libraries, use cmake options like WITH_SYSTEM_BOOST, as in the following example:

cmake -DWITH_SYSTEM_BOOST=ON [...]

To view an exhaustive list of -D options, invoke cmake -LH:

cmake -LH

Preserving diagnostic colors

If you pipe ninja to less and would like to preserve the diagnostic colors in the output in order to make errors and warnings more legible, run the following command:

cmake -DDIAGNOSTICS_COLOR=always ...

The above command works only with supported compilers.

The diagnostic colors will be visible when the following command is run:

ninja | less -R

Other available values for DIAGNOSTICS_COLOR are auto (default) and never.

Building a source tarball

To build a complete source tarball with everything needed to build from source and/or build a (deb or rpm) package, run

./make-dist

This will create a tarball like ceph-$version.tar.bz2 from git. (Ensure that any changes you want to include in your working directory are committed to git.)

Running a test cluster

From the ceph/ directory, run the following commands to launch a test Ceph cluster:

cd build
ninja vstart        # builds just enough to run vstart
../src/vstart.sh --debug --new -x --localhost --bluestore
./bin/ceph -s

Most Ceph commands are available in the bin/ directory. For example:

./bin/rbd create foo --size 1000
./bin/rados -p foo bench 30 write

To shut down the test cluster, run the following command from the build/ directory:

../src/stop.sh

Use the sysvinit script to start or stop individual daemons:

./bin/init-ceph restart osd.0
./bin/init-ceph stop

Running unit tests

To build and run all tests (in parallel using all processors), use ctest:

cd build
ninja
ctest -j$(nproc)

(Note: many targets built from src/test are not run by ctest. Targets whose names start with "unittest" are run by ninja check and can therefore be run with ctest. Targets whose names start with "ceph_test" cannot, and should be run by hand.)

When failures occur, look in build/Testing/Temporary for logs.

To build and run all tests and their dependencies without other unnecessary targets in Ceph:

cd build
ninja check -j$(nproc)

To run an individual test manually, run ctest with -R (regex matching):

ctest -R [regex matching test name(s)]

(Note: ctest does not build the test it runs, nor the dependencies needed to run it.)

To run an individual test manually and see all of its output, run ctest with the -V (verbose) flag:

ctest -V -R [regex matching test name(s)]

To run tests manually and run the jobs in parallel, run ctest with the -j flag:

ctest -j [number of jobs]

There are many other flags you can give ctest for better control over manual test execution. To view these options run:

man ctest

Building the Documentation

Prerequisites

The list of package dependencies for building the documentation can be found in doc_deps.deb.txt:

sudo apt-get install `cat doc_deps.deb.txt`

Building the Documentation

To build the documentation, ensure that you are in the top-level ceph directory, and execute the build script. For example:

admin/build-doc

Reporting Issues

To report an issue or view existing issues, please visit https://tracker.ceph.com/projects/ceph.

More Repositories

1. ceph-ansible: Ansible playbooks to deploy Ceph, the distributed filesystem. (Python, 1,685 stars)
2. ceph-container: Docker files and images to run Ceph in containers (Shell, 1,316 stars)
3. ceph-csi: CSI driver for Ceph (Go, 1,236 stars)
4. go-ceph: Go bindings for Ceph 🐙 🐙 🐙 (Go, 610 stars)
5. ceph-deploy: Deploy Ceph with minimal infrastructure, using just SSH access (Python, 421 stars)
6. calamari: Web-based monitoring and management for Ceph (Python, 349 stars)
7. cbt: The Ceph Benchmarking Tool (Python, 266 stars)
8. s3-tests: Compatibility tests for S3 clones (Python, 261 stars)
9. cn: Ceph Nano - One step S3 in container with Ceph. (Go, 235 stars)
10. ceph-client: Ceph kernel client (kernel modules) (C, 187 stars)
11. teuthology: Ceph test suite (Python, 153 stars)
12. cephadm-ansible: ansible playbooks to be used with cephadm (Python, 112 stars)
13. calamari-clients: Ceph Manager API Client Code (JavaScript, 102 stars)
14. ceph-cookbook: Chef cookbooks for Ceph (Ruby, 101 stars)
15. ceph-rust: Rust-lang interface to Ceph. (Rust, 92 stars)
16. dmclock: Code that implements the dmclock distributed quality of service algorithm. See "mClock: Handling Throughput Variability for Hypervisor IO Scheduling" by Gulati, Merchant, and Varman. (C++, 88 stars)
17. ceph-nagios-plugins: Nagios plugins for Ceph (Python, 82 stars)
18. ceph-nvmeof: Service to provide Ceph storage over NVMe-oF/TCP protocol (Python, 76 stars)
19. cephmetrics: ceph metric collectors with collectd integration (Python, 64 stars)
20. gf-complete: this repository is a read only mirror, the upstream is (C, 62 stars)
21. ceph-iscsi: Ceph iSCSI tools (Python, 61 stars)
22. cephfs-hadoop: cephfs-hadoop (Java, 57 stars)
23. romana: (JavaScript, 51 stars)
24. ceph-build: Helper scripts for building the official Ceph packages (Shell, 44 stars)
25. qemu-kvm: Ceph RBD support for Qemu/KVM (C, 40 stars)
26. phprados: PHP bindings for the RADOS client library (C, 37 stars)
27. ceph-cosi: COSI driver for Ceph Object Store aka RGW (Go, 34 stars)
28. jerasure: this repository is a read only mirror, the upstream is (C, 33 stars)
29. ceph-tools: Misc ceph tools (Python, 33 stars)
30. ceph-qa-suite: [DEPRECATED; see ceph.git/qa] Suite of Ceph QA tests to run with Teuthology (Python, 33 stars)
31. libs3: Fork of http://git.ischo.com/libs3.git (C, 32 stars)
32. ceph-salt: Deploy Ceph clusters using cephadm (Python, 31 stars)
33. ceph-chef: Chef cookbooks for managing a Ceph cluster (Ruby, 29 stars)
34. downburst: Fast Ubuntu Cloud Image creation on libvirt (Python, 27 stars)
35. ceph-iscsi-cli: NOTICE: moved to https://github.com/ceph/ceph-iscsi (Python, 25 stars)
36. ceph-medic: find common issues in ceph clusters (Python, 22 stars)
37. pulpito: A dashboard for Ceph tests (JavaScript, 22 stars)
38. ceph-iscsi-config: NOTICE: moved to https://github.com/ceph/ceph-iscsi (Python, 22 stars)
39. radosgw-agent: radosgw sync agent (Python, 22 stars)
40. ceph-cm-ansible: Ansible configurations for Ceph.com infrastructure (Shell, 21 stars)
41. python-crush: (C++, 20 stars)
42. ceph-mixins: A set of Grafana dashboards and Prometheus alerts for Ceph. (Jsonnet, 20 stars)
43. ceph-ci: ceph.git clone as source for CI (C++, 20 stars)
44. dpdk: DPDK (C, 19 stars)
45. puppet-ceph: Mirror of stackforge/puppet-ceph (Ruby, 17 stars)
46. ceph.io: This repo contains static site content for www.ceph.io (HTML, 16 stars)
47. libcrush: (C, 16 stars)
48. ceph-installer: A service to provision Ceph clusters (Python, 15 stars)
49. paddles: RESTful API to store (and report) on Ceph tests (Python, 14 stars)
50. persistent-volume-migrator: A collection of tools to migrate an ancient Kubernetes Ceph storage driver (in-tree, Flex) to Ceph-CSI (Go, 14 stars)
51. propernoun: Update PowerDNS from DHCP leases and libvirt virtual machines (Python, 13 stars)
52. rgw-pubsub-api: RGW PubSub API Clients (Go, 13 stars)
53. ceph-csi-operator: Kubernetes operator for managing the CephCSI plugins (Go, 13 stars)
54. obsync: rsync-like utility for syncing bucket data between object storage APIs like S3, Swift (Python, 12 stars)
55. ceph-iscsi-tools: Useful tools for a ceph/iscsi gateway environment (Python, 11 stars)
56. autobuild-ceph: Setup for running gitbuilder for the Ceph project (Shell, 11 stars)
57. ceph-openstack-tools: Tools to develop Ceph/OpenStack integration (Shell, 11 stars)
58. simplegpt: Simple Python library to parse GPT (GUID Partition Table) header and entries, useful as a learning tool (Python, 11 stars)
59. ceph-ruby: Easy management of Ceph Distributed Storage System (rbd, images, rados objects) using ruby. (Ruby, 11 stars)
60. python-jenkins: fork of python-jenkins for https://review.openstack.org/460363 (Python, 11 stars)
61. mod_fastcgi: Bugfixes and improvements to mod_fastcgi, for use with RADOS Gateway (C, 10 stars)
62. samba: Clone of the main samba repo: git://git.samba.org/samba.git (C, 10 stars)
63. csi-charts: csi-charts (9 stars)
64. chacra: A binary/file REST API to aid in multi-distro|arch|release management (Python, 9 stars)
65. ceph-erasure-code-corpus: Objects erasure encoded by Ceph (Shell, 9 stars)
66. ceph-client-standalone: Standalone Ceph kernel client -- you probably want https://github.com/NewDreamNetwork/ceph-client instead (C, 9 stars)
67. barclamp-ceph: Crowbar Barclamp for installing Ceph clusters (Ruby, 8 stars)
68. blkin: (C++, 8 stars)
69. shaman: source of truth for the state of repositories on Chacra nodes (Python, 8 stars)
70. apache2: A version of Apache HTTP Server with fixes for use with RADOS Gateway (C, 7 stars)
71. ceph-qa-chef: Chef cookbooks used in Ceph QA jobs. (This is deprecated; please see ceph-cm-ansible instead.) (Ruby, 7 stars)
72. ceph-kmod-rpm: kabi-tracking kmod RPMs for libceph, CephFS, and RDB for RHEL 7 (7 stars)
73. mod-proxy-fcgi: mod_proxy_fcgi for apache 2.2 (C, 6 stars)
74. ceph-devstack: DevStack files (Shell, 6 stars)
75. spawn: (C++, 6 stars)
76. leveldb: Fork of the LevelDB project (C++, 5 stars)
77. keys: SSH and other keys used by the project, mostly in the Sepia lab (Shell, 5 stars)
78. gmock: (C++, 5 stars)
79. cn-core: Bootstrap Ceph AIO - source of cn project (Go, 5 stars)
80. qemu-iotests: (Shell, 5 stars)
81. ceph-autotests: HISTORICAL value only: Autotest helper for Ceph QA (obsolete) (Python, 4 stars)
82. mita: Jenkins Slave orchestration service (Python, 4 stars)
83. asphyxiate: Grab source code documentation via Doxygen into a Sphinx document (Python, 4 stars)
84. collectd-4.10.1: A version of collectd that supports monitoring Ceph clusters (on top of the Debian 4.10.1-1+squeeze2 package) (C, 4 stars)
85. ceph-nagios-plugin: A Nagios plugin that checks the health of a ceph cluster. (Perl, 4 stars)
86. cookbook-vercoi: Chef Solo recipes used to bring up KVM hypervisors in the Sepia lab (Ruby, 4 stars)
87. prado: Prado is a webservice that provides a single script to run Ansible playbooks (Python, 4 stars)
88. handle_core: A userspace core file handler for Linux (C, 4 stars)
89. ceph-object-corpus: corpus of encoded ceph structures (Shell, 4 stars)
90. ceph-telemetry: (Python, 4 stars)
91. jenkins-slave-chef: Chef to setup jenkins slaves (pbuilder/regular). (Ruby, 3 stars)
92. merfi: Finds and signs files with different signing tools (gpg, rpm-sign) (Python, 3 stars)
93. ceph-kdump-copy: ceph kdump handler (Shell, 3 stars)
94. run-crowbar-on-sepia: Quick & dirty script to run Crowbar on Sepia (Python, 3 stars)
95. ceph-notes: (3 stars)
96. libkmip: (C, 3 stars)
97. cephmetrics-osp: (Shell, 2 stars)
98. ceph-brag: Ceph performance testing results repository. (Python, 2 stars)
99. munging_http_proxy: Munging HTTP proxy, for developing and debugging (Python, 2 stars)
100. cookbook-vm-general: (Python, 2 stars)