geard is no longer maintained - see OpenShift 3 and Kubernetes

geard

geard is a command line client for installing Docker images as containers onto a systemd-enabled Linux operating system (systemd 207 or newer). It may be run as a command:

$ sudo gear install openshift/busybox-http-app my-sample-service

to install the public image openshift/busybox-http-app to systemd on the local machine with the service name "ctr-my-sample-service". The command can also run as a daemon and serve API requests over HTTP (default port 43273):

$ sudo gear daemon
2014/02/21 02:59:42 ports: searching block 41, 4000-4099
2014/02/21 02:59:42 Starting HTTP on :43273 ...

The gear CLI can connect to this agent:

$ gear stop localhost/my-sample-service
$ gear install openshift/busybox-http-app localhost/my-sample-service.1 localhost/my-sample-service.2
$ gear start localhost/my-sample-service.1 localhost/my-sample-service.2

The geard agent exposes operations on containers needed for large-scale orchestration in production environments, and tries to map those operations closely to the underlying concepts in Docker and systemd. It supports linking containers into logical groups (applications) across multiple hosts with iptables-based local networking, shared environment files, and SSH access to containers. It is also a test bed for prototyping related container services that may eventually exist as Docker plugins, such as routing, event notification, and efficient idling and network activation.

The gear daemon and local commands must run as root to interface with the Docker daemon over its Unix socket and systemd over DBus.

What is a "gear", and why Docker?

In OpenShift Origin, a gear is a secure, isolated environment for user processes using cGroups and SELinux. As Linux namespace technology has evolved to provide other means of constraining processes, the term "container" has become prevalent, and is used interchangeably below. Docker has made the creation and distribution of container images effortless, and the ability to reproducibly run a Linux application in many environments is a key component for developers and administrators. At the same time, the systemd process manager has unified many important Linux process subsystems (logging, audit, managing and monitoring processes, and controlling cGroups) into a reliable and consistent whole.

What are the key requirements for production containers?

  • Containers are securely isolated from the host except through clear interfaces

    By default, a container should only see what the host allows - being able to become root within a container is extremely valuable for installing packaged software, but it is also a significant security concern. Both user namespaces and SELinux are key components for protecting the host from arbitrary code, and should be secure by default within Docker. However, administrators should be able to expose system services or other containers to a container as necessary. Other limits include network abstractions and quota restrictions on the files containers create.

  • Container processes should be independent and resilient to failure

    Processes fail, become corrupted, and die. Those failures should be isolated and recoverable - a key feature of systemd is its comprehensive ability to handle the many ways processes die, and to restart, recover, limit, and track the processes involved. To the extent possible, the failure of one component within the system should not block restarting or reinitializing other containers, especially in bulk.

  • Containers should be portable across hosts

    A Docker image should be reusable across hosts. This means that the underlying Docker abstractions (links, port mappings, environment files) should be used to ensure the gear does not become dependent on the host system except where necessary. The system should make it easy to share environment and context between gears and move or recreate them among host systems.

  • Containers must be auditable, constrained, and reliably logged

    Many of the most important characteristics of Linux security are difficult to enforce on arbitrary processes. systemd provides standard patterns for each of these, and when properly integrated with Docker it can give administrators in multi-tenant or restricted environments peace of mind.

Actions on a container

Here are the supported container actions on the agent - these should map cleanly to Docker, systemd, or a very simple combination of the two. Extensions are intended to simplify cross-container actions (shared environment and links).

  • Create a new system unit file that runs a single docker image (install and start a container)

    $ gear install openshift/busybox-http-app localhost/my-sample-service --start -p 8080:0
    
    $ curl -X PUT "http://localhost:43273/container/my-sample-service" -H "Content-Type: application/json" -d '{"Image": "openshift/busybox-http-app", "Started":true, "Ports":[{"Internal":8080}]}'
    
  • Stop, start, and restart a container

    $ gear stop localhost/my-sample-service
    $ gear start localhost/my-sample-service
    $ gear restart localhost/my-sample-service
    
    $ curl -X PUT "http://localhost:43273/container/my-sample-service/stopped"
    $ curl -X PUT "http://localhost:43273/container/my-sample-service/started"
    $ curl -X POST "http://localhost:43273/container/my-sample-service/restart"
    
  • Deploy a set of containers on one or more systems, with links between them:

    # create a simple two container web app
    $ gear deploy deployment/fixtures/simple_deploy.json localhost
    

    Deploy creates links between the containers with iptables - use nsenter to join the container web-1 and try curling 127.0.0.1:8081 to connect to the second web container. These links are stable across hosts and can be changed without the container knowing.
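
    For example (reusing the switchns helper demonstrated in the mongo example below in place of raw nsenter; web-1 is the container created by simple_deploy.json):

    $ sudo switchns --container=web-1 -- /bin/bash
    > curl 127.0.0.1:8081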

    # create a mongo db replica set (some assembly required)
    $ gear deploy deployment/fixtures/mongo_deploy.json localhost
    $ sudo switchns --container=db-1 -- /bin/bash
    > mongo 192.168.1.1
    MongoDB shell version: 2.4.9
    > rs.initiate({_id: "replica0", version: 1, members:[{_id: 0, host:"192.168.1.1:27017"}]})
    > rs.add("192.168.1.2")
    > rs.add("192.168.1.3")
    > rs.status()
    # wait....
    > rs.status()
    

    Note: The argument to initiate() sets the correct hostname for the first member; otherwise, the other members cannot connect.

  • View the systemd status of a container

    $ gear status localhost/my-sample-service
    $ curl "http://localhost:43273/container/my-sample-service/status"
    
  • Tail the logs for a container (the stream ends after 30 seconds)

    $ curl -H "Accept: text/plain;stream=true" "http://localhost:43273/container/my-sample-service/log"
    
  • List all installed containers (for one or more servers)

    $ gear list-units localhost
    $ curl "http://localhost:43273/containers"
    
  • Perform housekeeping cleanup on the geard directories

    $ gear clean
    
  • Create a new empty Git repository

    $ curl -X PUT "http://localhost:43273/repository/my-sample-repo"
    
  • Link containers with local loopback ports (e.g. 127.0.0.2:8081 -> 9.8.23.14:8080). If the local IP isn't specified, it defaults to 127.0.0.1.

    $ gear link -n=127.0.0.2:8081:9.8.23.14:8080 localhost/my-sample-service
    $ curl -X PUT -H "Content-Type: application/json" "http://localhost:43273/container/my-sample-service" -d '{"Image": "openshift/busybox-http-app", "Started":true, "Ports":[{"Internal":8080}], "NetworkLinks": [{"FromHost": "127.0.0.1","FromPort": 8081, "ToHost": "9.8.23.14","ToPort": 8080}]}'
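
    Conceptually, each network link is a local NAT rewrite managed by geard. As a rough illustration only (not the exact rules geard programs), the link above corresponds to an iptables DNAT rule of the form:

    $ sudo iptables -t nat -A OUTPUT -p tcp -d 127.0.0.2 --dport 8081 -j DNAT --to-destination 9.8.23.14:8080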
    
  • Set a public key to enable SSH access to a container or Git SSH access to a repository

    $ gear add-keys --key-file=[FILE] my-sample-service
    $ curl -X POST "http://localhost:43273/keys" -H "Content-Type: application/json" -d '{"Keys": [{"Type":"authorized_keys","Value":"ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ=="}], "Containers": [{"Id": "my-sample-service"}]}'
    
  • Enable SSH access to join a container for a set of authorized keys

    # Make sure that /etc/ssh/sshd_config has the following two lines.
    AuthorizedKeysCommand /usr/sbin/gear-auth-keys-command
    AuthorizedKeysCommandUser nobody
    
    # Restart sshd to pickup the changes.
    $ systemctl restart sshd.service
    
    # Install and start a container in isolate mode, which is necessary for SSH.
    $ gear install openshift/busybox-http-app testapp1 --isolate --start
    
    # Add ssh keys.
    $ gear add-keys --key-file=/path/to/id_rsa.pub testapp1
    
    # ssh into the container.
    # Note: All of this works with a remote host as well.
    $ ssh ctr-testapp1@localhost
    2014/05/01 12:48:21 docker: execution driver native-0.1
    bash-4.2$ id
    uid=1019(container) gid=1019(container) groups=1019(container)
    bash-4.2$ ps -ef
    UID        PID  PPID  C STIME TTY          TIME CMD
    root         1     0  0 19:39 ?        00:00:00 su container -s /bin/bash -c /.container.cmd
    contain+    16     1  0 19:39 ?        00:00:00 /usr/bin/ruby-mri /usr/mock/mock_server.rb 0.0.0.0 /usr/mock  /source/
    contain+    22     0  0 19:48 ?        00:00:00 /bin/bash -l
    contain+    24    22  0 19:48 ?        00:00:00 ps -ef
    bash-4.2$
    
  • Build a new image using Source-to-Images (STI) from a source URL and base image

    # build an image on the local system and tag it as mybuild-1
    $ gear build git://github.com/pmorie/simple-html pmorie/fedora-mock mybuild-1
    
    # remote build
    $ curl -X POST "http://localhost:43273/build-image" -H "Content-Type: application/json" -d '{"BaseImage":"pmorie/fedora-mock","Source":"git://github.com/pmorie/simple-html","Tag":"mybuild-1"}'
    
  • Use Git repositories on the geard host

    # Create a repository on the host.
    # Note: We are using localhost for this example, however, it works for remote hosts as well.
    $ gear create-repo myrepo [<optional source url to clone>]
    
    # Add keys to grant access to the repository.
    # The write flag enables git push; otherwise only git clone (read access) is allowed.
    $ gear add-keys --write=true --key-file=/path/to/id_rsa.pub repo://myrepo
    
    # Clone the repo.
    $ git clone git-myrepo@localhost:~ myrepo
    
    # Commit some changes locally.
    $ cd myrepo
    $ echo "Hello" > hi.txt
    $ git add hi.txt
    $ git commit -m "Add simple text file."
    
    # Push the changes to the repository on the host.
    $ git push origin master
    
  • Fetch a Git archive zip for a repository

    $ curl "http://localhost:43273/repository/my-sample-repo/archive/master"
    
  • Set and retrieve environment files for sharing between containers (patch and pull operations)

    $ gear set-env localhost/my-sample-service A=B B=C
    $ gear env localhost/my-sample-service
    $ curl "http://localhost:43273/environment/my-sample-service"
    $ gear set-env localhost/my-sample-service --reset
    

    You can also set the environment during installation:

    $ gear install ccoleman/envtest localhost/env-test1 --env-file=deployment/fixtures/simple.env
    

    Loading environment into a running container depends on the "docker run --env-file" option in Docker master from 0.9.x after April 1st. You must start the daemon with "gear daemon --has-env-file" in order to use the option - it will be made the default after 0.9.1 lands and the minimal requirements are updated.
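
    For example:

    $ sudo gear daemon --has-env-file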

  • More to come....

geard allows an administrator to easily ensure a given Docker container will always run on the system by creating a systemd unit describing a docker run command. It will execute the Docker container processes as children of the systemd unit, allowing auto-restart of the container, customization of additional namespace options, capture of stdout and stderr to journald, and audit/seccomp integration for those child processes. Note that foreground execution is currently not in Docker master - see https://github.com/alexlarsson/docker/tree/forking-run for some prototype work demonstrating the concept.
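
For illustration only, a hand-written unit of roughly the shape geard generates might look like this (a hypothetical sketch - the generated units contain more options, plus the foreground-execution handling described above):

[Unit]
Description=Hypothetical geard-style container unit

[Service]
# keep the docker client in the foreground so systemd owns and supervises the process
ExecStart=/usr/bin/docker run --rm --name ctr-my-sample-service openshift/busybox-http-app
Restart=always

[Install]
WantedBy=multi-user.target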

Each created systemd unit can be assigned a unique Unix user for quota and security purposes with the --isolate flag, which prototypes isolation prior to user namespaces being part of Docker. An SELinux MCS category label will automatically be assigned to the container to separate it from the other containers on the system, and containers can be set into systemd slices with resource constraints.
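
Once such a unit exists, standard systemd tooling can adjust its resource constraints - for example (illustrative values, not a geard command):

$ sudo systemctl set-property ctr-my-sample-service.service MemoryLimit=512M CPUShares=512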

Try it out

The geard code depends on:

  • systemd 207 (Fedora 20 or newer)
  • Docker 0.7 or newer (0.9.x from Apr 1 to use --env-file, plus various other experimental features not yet in tree)

If you don't have those, you can use the following to run in a development VM:

  • Vagrant
  • VirtualBox

If you have Go installed locally (with a valid GOPATH env variable set), run:

go get github.com/openshift/geard
cd $GOPATH/src/github.com/openshift/geard
vagrant up

If you don't have Go installed locally, run the following steps:

git clone git@github.com:openshift/geard && cd geard
vagrant up

vagrant up will install a few RPMs the first time it is started. Once the VM is running, you can ssh into it:

vagrant ssh

The contrib/build script checks and downloads Go dependencies, builds the gear binary, and then installs it to /vagrant/bin/gear and /usr/bin/gear. It has a few flags - '-s' builds with SELinux support for SSH and Git.

contrib/build -s

Once you've built the executables, you can run:

sudo $GOPATH/bin/gear daemon

to start the gear agent. The agent will listen on port 43273 by default and print logs to the console - hit CTRL+C to stop the agent.
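
Once the agent is up, you can verify it responds using the container listing endpoint shown earlier:

curl "http://localhost:43273/containers"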

See contrib/example.sh and contrib/stress.sh for more examples of API calls.

An example systemd unit file for geard is included in the contrib/ directory. After building, the following commands will install the unit file and start the agent under systemd:

sudo systemctl enable $(pwd)/contrib/geard.service
sudo systemctl start geard.service

Report issues and contribute

Bugs are tracked by the Red Hat and OpenShift test teams in Bugzilla in the geard component, but you can always open a GitHub issue as well.

We are glad to accept pull requests for fixes, features, and experimentation. A rough map of the code structure is available here.

To submit a pull request, ensure your commit has a good description of the problem and contains a test case where possible for your function. We use Travis to perform builds and test pull requests - if your pull request fails Travis we'll try to help you get it fixed.

To run the test suite locally, from your development machine or VM, run:

$ contrib/test

to run just the unit tests. To run the full integration suite you should run:

$ contrib/build -g
$ contrib/test -a

which will build the current source, restart the gear daemon service under systemd, and then run both unit tests and integration tests. Be aware that at the current time a few of the integration tests fail occasionally due to race conditions - we hope to address that soon. Just retry! :)

How can geard be used in orchestration?

See the orchestrating geard doc

API Design

See the API design doc

Disk Structure

Description of storage on disk

geard Concepts

Outline of how some of the core operations work:

  • Linking - use iptables rules and environment variables to simplify container interconnect
  • SSH - generate authorized_keys for a user on demand
  • Isolated container - start an arbitrary image and force it to run as a given user on the host by chowning the image prior to execution
  • Idling - use iptables rules to wake containers on SYN packets
  • Git - host Git repositories inside a running Docker container
  • Logs - stream journald log entries to clients
  • Builds - use transient systemd units to execute a build inside a container
  • Jobs - run one-off jobs as systemd transient units and extract their logs and output after completion

Not yet prototyped:

  • Integrated health check - mark containers as available once a pluggable/configurable health check passes
  • Joining - reconnect to an already running operation
  • Direct server to server image pulls - allow hosts to act as a distributed registry
  • Job callbacks - invoke a remote endpoint after an operation completes
  • Local routing - automatically distribute config for inbound and outbound proxying via HAProxy
  • Repair - clean up and perform consistency checks on stored data (most operations assume some cleanup)
  • Capacity reporting - report capacity via API calls, allow precondition PUTs based on remaining capacity ("If-Match: capacity>=5"), allow capacity to be defined via config

Building Images

geard uses Source-to-Images (STI) to build deployable images from a base image and application source. STI supports a number of use cases for building deployable images, including:

  1. Use a git repository as a source
  2. Incremental builds: downloaded dependencies and generated artifacts are re-used across builds

A number of public STI base images exist:

  1. Ruby 1.9 on CentOS 6.x
  2. Wildfly 8 on CentOS 6.x
  3. WEBrick, a simple Ruby HTTP server, on latest Fedora

See the STI docs if you want to create your own STI base image.

License

Apache Software License (ASL) 2.0.
