Source-To-Image (S2I)

Overview

Source-to-Image (S2I) is a toolkit and workflow for building reproducible container images from source code. S2I produces ready-to-run images by injecting source code into a container image and letting the container prepare that source code for execution. By creating self-assembling builder images, you can version and control your build environments exactly like you use container images to version your runtime environments.

For a deep dive on S2I you can view this presentation.

Want to try it right now? Download the latest release and run:

$ s2i build https://github.com/sclorg/django-ex centos/python-35-centos7 hello-python
$ docker run -p 8080:8080 hello-python

Now browse to http://localhost:8080 to see the running application.

You've just built and run a new container image from source code in a git repository, no Dockerfile necessary.

How Source-to-Image works

For a dynamic language like Ruby, the build-time and run-time environments are typically the same. Starting from a builder image that describes this environment - one with Ruby, Bundler, Rake, Apache, GCC, and the other packages needed to set up and run a Ruby application already installed - source-to-image performs the following steps (sketched in plain Docker commands after the list):

  1. Start a container from the builder image with the application source injected into a known directory.
  2. The container process transforms that source code into the appropriate runnable setup - in this case, by installing dependencies with Bundler and moving the source code into a directory where Apache has been preconfigured to look for the Ruby config.ru file.
  3. Commit the new container and set the image entrypoint to be a script (provided by the builder image) that will start Apache to host the Ruby application.
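
Conceptually, the three steps above map onto plain Docker operations. A rough sketch - the ruby-builder image name and the /usr/libexec/s2i script paths are hypothetical conventions, and s2i streams the source in as a tar rather than bind-mounting it:

$ docker run --name build -v "$(pwd):/tmp/src" ruby-builder /usr/libexec/s2i/assemble
$ docker commit --change='CMD ["/usr/libexec/s2i/run"]' build hello-ruby
$ docker rm build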

For compiled languages like C, C++, Go, or Java, the dependencies necessary for compilation might dramatically outweigh the size of the actual runtime artifacts. To keep runtime images slim, S2I enables a multiple-step build process, where a binary artifact such as an executable or Java WAR file is created in the first builder image, extracted, and injected into a second runtime image that simply places the executable in the correct location for execution.

For example, to create a reproducible build pipeline for Tomcat (the popular Java webserver) and Maven:

  1. Create a builder image containing OpenJDK and Tomcat that expects to have a WAR file injected
  2. Create a second image that layers on top of the first image Maven and any other standard dependencies, and expects to have a Maven project injected
  3. Invoke source-to-image using the Java application source and the Maven image to create the desired application WAR
  4. Invoke source-to-image a second time using the WAR file from the previous step and the initial Tomcat image to create the runtime image
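
As a command-level sketch of steps 3 and 4, the two invocations might look like this (all image and path names are hypothetical, and extracting the WAR from the intermediate image is elided):

$ s2i build https://github.com/example/app.git maven-jdk-builder app-war-image
$ s2i build ./extracted-war tomcat-openjdk app-runtime

s2i can also collapse such pipelines into a single invocation with the --runtime-image flag, which copies the assembled artifacts into a second, slimmer runtime image.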

By placing our build logic inside of images, and by combining the images into multiple steps, we can keep our runtime environment close to our build environment (same JDK, same Tomcat JARs) without requiring build tools to be deployed to production.

Goals

Reproducibility

Allow build environments to be tightly versioned by encapsulating them within a container image and defining a simple interface (injected source code) for callers. Reproducible builds are a key requirement for enabling security updates and continuous integration in containerized infrastructure, and builder images help ensure repeatability as well as the ability to swap runtimes.

Flexibility

Any existing build system that can run on Linux can be run inside of a container, and each individual builder can also be part of a larger pipeline. In addition, the scripts that process the application source code can be injected into the builder image, allowing authors to adapt existing images to enable source handling.

Speed

Instead of building multiple layers in a single Dockerfile, S2I encourages authors to represent an application in a single image layer. This saves time during creation and deployment, and allows for better control over the output of the final image.

Security

Dockerfiles are run without many of the normal operational controls of containers, usually running as root and having access to the container network. S2I can be used to control what permissions and privileges are available to the builder image since the build is launched in a single container. In concert with platforms like OpenShift, source-to-image can enable admins to tightly control what privileges developers have at build time.

Anatomy of a builder image

Creating builder images is easy. s2i expects you to supply the following scripts for use with an image:

  1. assemble - builds and/or deploys the source
  2. run - runs the assembled artifacts
  3. save-artifacts (optional) - captures the artifacts from a previous build into the next incremental build
  4. usage (optional) - displays builder image usage information

Additionally, for the best user experience and optimized s2i operation, we suggest that images have the /bin/sh and tar commands available.
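
To make the interface concrete, here is a minimal hypothetical assemble/run pair for a Ruby builder image, assuming s2i injects the source into /tmp/src (the conventional default location):

assemble:

#!/bin/sh -e
# Copy the injected source into the working directory and install dependencies.
cp -Rf /tmp/src/. ./
bundle install

run:

#!/bin/sh
# Start the server in the foreground so it receives container signals.
exec bundle exec rackup -p 8080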

See a practical tutorial on how to create a builder image and read a detailed description of the requirements and scripts along with examples of builder images.

Build workflow

The s2i build workflow is:

  1. s2i creates a container based on the build image and passes it a tar file that contains:
    1. The application source in src, excluding any files selected by .s2iignore.
    2. The build artifacts in artifacts (if applicable - see incremental builds).
  2. s2i sets the environment variables from .s2i/environment (optional).
  3. s2i starts the container and runs its assemble script.
  4. s2i waits for the container to finish.
  5. s2i commits the container, setting the CMD for the output image to be the run script and tagging the image with the name provided.
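
For illustration, the tar passed to the builder container in step 1 therefore has roughly this layout:

src/        application source, minus anything matched by .s2iignore
artifacts/  output of save-artifacts from the prior build (incremental builds only)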

You can filter the contents of the source tree by supplying a .s2iignore file in the root directory of the source repository. The .s2iignore file contains patterns matching the set of files and directories you want filtered from the image s2i produces.

Specifically:

  1. Specify one rule per line, with each line terminating in \n.
  2. Filepaths are appended to the absolute path of the root of the source tree (either the local directory supplied, or the target destination of the clone of the remote source repository s2i creates).
  3. Wildcards and globbing (file name expansion) leverage Go's filepath.Match and filepath.Glob functions.
  4. Search is not recursive. Subdirectory paths must be specified (though wildcards and regular expressions can be used in the subdirectory specifications).
  5. If the first character is the # character, the line is treated as a comment.
  6. If the first character is !, the rule is an exception rule, and can undo candidates selected for filtering by prior rules (but only prior rules).

Here are some examples to help illustrate:

Regarding subdirectories: a */temp* rule filters any files whose names start with temp in any subdirectory immediately (or one level) below the root directory, and a */*/temp* rule filters such files in any subdirectory two levels below the root directory.
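
For example, a .s2iignore combining both rules:

*/temp*
*/*/temp*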

Next, to illustrate exception rules, first consider the following example snippet of a .s2iignore file:

*.md
!README.md

With this exception rule, README.md will not be filtered and will remain in the image s2i produces. However, with this snippet:

!README.md
*.md

README.md would be filtered and would not be part of the resulting image s2i produces: even though !README.md restores it from any prior rules, the *.md rule that follows filters it again. Since *.md follows !README.md, *.md takes precedence.

Users can also set extra environment variables in the application source code. They are passed to the build, where the assemble script consumes them, and they are all also present in the output application image. These variables are defined in the .s2i/environment file inside the application sources. The format of this file is simple key=value pairs, for example:

FOO=bar

In this case, the value of FOO environment variable will be set to bar.
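
Environment variables can also be supplied directly on the command line with the -e (--env) flag of s2i build; for example, with hypothetical builder image and application names:

$ s2i build . builder-image my-app -e FOO=bar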

Using ONBUILD images

If you want to use one of the official Dockerfile language stack images for your build, you don't have to do anything extra. S2I is capable of recognizing container images with ONBUILD instructions and will choose the OnBuild strategy. This strategy triggers all ONBUILD instructions and executes the assemble script (if it exists) as the last instruction.

Since ONBUILD images usually don't provide an entrypoint, you will have to supply one in order to use this build strategy. Either include a run, start, or execute script in the root folder of your application source, or specify a valid S2I script URL, in which case the run script will be fetched and set as the entrypoint.
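
As a sketch, such a stack image typically declares instructions like the following in its Dockerfile (a hypothetical Ruby example) that fire when s2i builds on top of it:

ONBUILD ADD . /app
ONBUILD RUN cd /app && bundle install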

Incremental builds

s2i automatically detects:

  • Whether a builder image is compatible with incremental building
  • Whether a previous image exists, with the same name as the output name for this build

If a save-artifacts script exists, a prior image already exists, and the --incremental=true option is used, the workflow is as follows:

  1. s2i creates a new container image from the prior build image
  2. s2i runs save-artifacts in this container - this script is responsible for streaming out a tar of the artifacts to stdout
  3. s2i builds the new output image:
    1. The artifacts from the previous build will be in the artifacts directory of the tar passed to the build
    2. The build image's assemble script is responsible for detecting and using the build artifacts

NOTE: The save-artifacts script is responsible for streaming out dependencies in a tar file.
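
A minimal hypothetical save-artifacts for a Ruby builder, assuming dependencies are installed under vendor/bundle in the working directory:

#!/bin/sh
# Stream the dependency cache to stdout as a tar; s2i captures it and
# supplies it to the next build in the artifacts directory.
tar cf - vendor/bundle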

Dependencies

  1. docker >= 1.6
  2. Go >= 1.7.1
  3. (optional) Git

Installation

Using go install

You can install the s2i binary using go install, which downloads the source-to-image code to your Go module cache, builds the s2i binary, and installs it into $GOBIN (or $GOPATH/bin if $GOBIN is not set, or $HOME/go/bin if the GOPATH environment variable is also not set):

$ go install github.com/openshift/source-to-image/cmd/s2i@latest

For Mac

You can either follow the installation instructions for Linux (and use the darwin-amd64 link) or you can just install source-to-image with Homebrew:

$ brew install source-to-image

For Linux

Go to the releases page and download the correct distribution for your machine. Choose either the linux-386 or the linux-amd64 links for 32 and 64-bit, respectively.

Unpack the downloaded tar with:

$ tar -xvzf release.tar.gz

You should now see an executable called s2i. Either add the location of s2i to your PATH environment variable, or move it to a pre-existing directory in your PATH. For example,

# cp /path/to/s2i /usr/local/bin

will work with most setups.

For Windows

Download the latest 64-bit Windows release. Extract the zip file through a file browser. Add the extracted directory to your PATH. You can now use s2i from the command line.

Note: We have had some reports of Windows Defender falsely reporting that the Windows binaries contain "Trojan:Win32/Azden.A!cl". This appears to be a common false alert for other applications as well.

From source

Assuming Go, Git, and Docker are installed and configured, execute the following commands:

$ git clone https://github.com/openshift/source-to-image
$ cd source-to-image
$ export PATH=${PATH}:`pwd`/_output/local/bin/`go env GOOS`/`go env GOHOSTARCH`/
$ ./hack/build-go.sh

Security

Since the s2i command uses the Docker client library, it has to run in the same security context as the docker command. On some systems, it is enough to add yourself to the 'docker' group to work with Docker as a non-root user. On recent versions of Fedora/RHEL, however, using the sudo command is recommended, as it is more auditable and secure.

If you are using the sudo docker command already, then you will have to also use sudo s2i to give S2I permission to work with Docker directly.

Be aware that being a member of the 'docker' group effectively grants root access, as described here.
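
If you opt for the group approach anyway, on most Linux distributions it amounts to the following (log out and back in for the change to take effect):

$ sudo usermod -aG docker $(whoami)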

Getting Started

You can start using s2i right away (see releases) with the following test sources and publicly available images:

$ s2i build https://github.com/openshift/ruby-hello-world registry.redhat.io/ubi8/ruby-27 test-ruby-app
$ docker run --rm -i -p :8080 -t test-ruby-app
$ s2i build --ref=10.x --context-dir=helloworld https://github.com/wildfly/quickstart openshift/wildfly-101-centos7 test-jee-app
$ docker run --rm -i -p 8080:8080 -t test-jee-app
