
OpenShift on OpenStack

Maintenance Status

This project is no longer being developed or maintained by its original authors.

The official OpenShift installer now supports various cloud providers, including OpenStack, so much of the development effort has moved there.

We recommend you take a look at it.

About

A collection of documentation, Heat templates, configuration and everything else that’s necessary to deploy OpenShift on OpenStack.

This template uses Heat to create the OpenStack infrastructure components, then calls the OpenShift Ansible installer playbooks to install and configure OpenShift on the VMs.

Architecture

All of the OpenShift VMs will share a private network. This network is connected to the public network by a router.

The deployed OpenShift environment is composed of a replicated set of OpenShift master VMs fronted by a load balancer. This provides both a single point of access and some HA capability. The applications run on one or more OpenShift node VMs, connected by a private software defined network (SDN) which can be implemented with either Open vSwitch or Flannel.

A bastion server controls the host and service configuration, which is applied by Ansible playbooks executed from the bastion host.

The bastion server, master nodes and infra nodes are also given floating IP addresses on the public network. This provides direct access to the bastion server, from which you can reach all nodes over SSH. Master and infra nodes are assigned floating IPs to make sure these nodes remain accessible when an external load balancer is used for accessing OpenShift services.

All of the OpenShift hosts (master, infra and node) have block storage for Docker images and containers provided by Cinder. OpenShift runs a local Docker registry, also backed by Cinder block storage. Finally, all nodes have access to Cinder volumes, which can be created by OpenStack users and mounted into containers by Kubernetes.

(Architecture diagram)

Prerequisites

  1. OpenStack version Juno or later with the Heat, Neutron, Ceilometer and Aodh (Mitaka or later) services running:

    • heat-api-cfn service - used for passing Heat metadata to Nova instances

    • Neutron LBaaS service (optional) - used for load balancing requests in HA mode. If this service is not available, you can deploy a dedicated load balancer node; see Select Loadbalancer Type

    • Ceilometer services (optional) - used when autoscaling is enabled

  2. ServerGroupAntiAffinityFilter enabled in the Nova service (optionally ServerGroupAffinityFilter when using an all-in-one OpenStack environment); see the filter check below

  3. CentOS 7.2 cloud image (we leverage cloud-init) loaded in Glance for OpenShift Origin deployments, or a RHEL 7.2 cloud image if doing Atomic Enterprise or OpenShift Container Platform. Make sure to use official images to avoid unexpected issues during deployment (e.g. a custom firewall may block OpenShift inter-node communication).

  4. An SSH keypair loaded into Nova

  5. A (Neutron) network with a pool of floating IP addresses available

CentOS and RHEL are the only tested distros for now.
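
A quick way to confirm the scheduler filter is enabled (a sketch; the option name and file path assume a default Nova configuration of that era):

# ServerGroupAntiAffinityFilter should appear in the scheduler filter list
grep scheduler_default_filters /etc/nova/nova.conf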

DNS Server

The OpenShift installer requires that all nodes be reachable via their hostnames. Since OpenStack does not currently provide internal name resolution, this needs to be done with an external DNS service that all nodes use via the dns_nameserver parameter.

In a production deployment this would be your existing DNS, but if you don’t have the ability to update it to add new name records, you will have to deploy one yourself.

We have provided a separate repository that can deploy a DNS server suitable for OpenShift.

Note
If your DNS supports dynamic updates via RFC 2136, you can pass the update key to the Heat stack and all nodes will register themselves as they come up. Otherwise, you will have to update your DNS records manually.
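
If you do end up updating records manually, here is a minimal sketch using nsupdate; the server name, key file path and record values are placeholders:

# add an A record for a node on a BIND server that allows RFC 2136 updates
nsupdate -k /etc/rndc.key <<'EOF'
server dns.example.com
update add origin-node-0.example.com 300 A 192.168.1.10
send
EOF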

Red Hat Software Repositories

When installing OpenShift Container Platform on RHEL the OpenShift and OpenStack repositories must be enabled, along with several common repositories. These repositories must be available under the subscription account used for installation.

Table 1. Required Repositories for RHEL installation

Repo Name                       | Purpose
--------------------------------|-------------------------------------------
rhel-7-server-rpms              | Standard RHEL Server RPMs
rhel-7-server-extras-rpms       | Supporting RPMs
rhel-7-server-optional-rpms     | Supporting RPMs
rhel-7-server-openstack-10-rpms | OpenStack client and data collection RPMs
rhel-7-server-ose-3.5-rpms      | OpenShift Container Platform RPMs
rhel-7-fast-datapath-rpms       | Required for OCP 3.5+ and OVS 2.6+
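
A sketch of enabling these repositories on a registered system, using the repo IDs from the table above:

subscription-manager repos \
  --enable=rhel-7-server-rpms \
  --enable=rhel-7-server-extras-rpms \
  --enable=rhel-7-server-optional-rpms \
  --enable=rhel-7-server-openstack-10-rpms \
  --enable=rhel-7-server-ose-3.5-rpms \
  --enable=rhel-7-fast-datapath-rpms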

Creating an All-In-One Demo Environment

The following steps can be used to set up an all-in-one testing/development environment:

# OpenStack does not run with NetworkManager
systemctl stop NetworkManager
systemctl disable NetworkManager

# The Packstack Installer is not supported for production but will work
# for demonstrations
yum -y install openstack-packstack libvirt git

# Add room for images if /var/lib is too small
mv /var/lib/libvirt/images /home
ln -s /home/images /var/lib/libvirt/images

# Install OpenStack demonstrator with no real security
#   This produces the keystonerc_admin file used below
packstack --allinone --provision-all-in-one-ovs-bridge=y \
  --os-heat-install=y --os-heat-cfn-install=y \
  --os-neutron-lbaas-install=y \
  --keystone-admin-passwd=password --keystone-demo-passwd=password

# Retrieve the Heat templates for OpenShift
git clone https://github.com/redhat-openstack/openshift-on-openstack.git

# Retrieve a compatible image for the OpenShift VMs
curl -O http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2

# Set access environment parameters for the new OpenStack service
source keystonerc_admin

# Load the VM image into the store and make it available for creating VMs
glance image-create --name centos72 --is-public True \
  --disk-format qcow2 --container-format bare \
  --file CentOS-7-x86_64-GenericCloud.qcow2
# For newer versions of glance clients, substitute "--is-public True" with "--visibility public"

# Install the current user's SSH key for access to VMs
nova keypair-add --pub-key ~/.ssh/id_rsa.pub default

Deployment

You can pass all environment variables to Heat on the command line. However, two environment files are provided as examples.

  • env_origin.yaml is an example of the variables to deploy an OpenShift Origin 3 environment.

  • env_aop.yaml is an example of the variables to deploy an Atomic Enterprise or OpenShift Container Platform 3 environment. Note that the deployment type should be openshift-enterprise for OpenShift or atomic-enterprise for Atomic Enterprise. Also, a valid RHN subscription is required for deployment.

Here is a sample environment file which uses a subset of the parameters that can be set by the user to configure the OpenShift deployment. All configurable parameters, including descriptions, can be found in the parameters section of the main template. Assuming your external network is called public, your SSH key is default, your CentOS 7.2 image is centos72 and your domain name is example.com, this is how you deploy OpenShift Origin:

cat << EOF > openshift_parameters.yaml
parameters:
   # Use OpenShift Origin (vs OpenShift Container Platform)
   deployment_type: origin

   # set SSH access to VMs
   ssh_user: centos
   ssh_key_name: default

   # Set the image type and size for the VMs
   bastion_image: centos72
   bastion_flavor: m1.medium
   master_image: centos72
   master_flavor: m1.medium
   infra_image: centos72
   infra_flavor: m1.medium
   node_image: centos72
   node_flavor: m1.medium
   loadbalancer_image: centos72
   loadbalancer_flavor: m1.medium

   # Set an existing network for inbound and outbound traffic
   external_network: public
   dns_nameserver: 8.8.4.4,8.8.8.8

   # Define the host name templates for master and nodes
   domain_name: "example.com"
   master_hostname: "origin-master"
   node_hostname: "origin-node"

   # Allocate additional space for Docker images
   master_docker_volume_size_gb: 25
   infra_docker_volume_size_gb: 25
   node_docker_volume_size_gb: 25

   # Specify the (initial) number of nodes to deploy
   node_count: 2

   # Add auxiliary services: OpenStack router and internal Docker registry
   deploy_router: False
   deploy_registry: False

   # If using RHEL image, add RHN credentials for RPM installation on VMs
   rhn_username: ""
   rhn_password: ""
   rhn_pool: '' # OPTIONAL

   # Currently Ansible 2.1 is not supported so add these parameters as a workaround
   openshift_ansible_git_url: https://github.com/openshift/openshift-ansible.git
   openshift_ansible_git_rev: master

resource_registry:
  # use neutron LBaaS
  OOShift::LoadBalancer: openshift-on-openstack/loadbalancer_neutron.yaml
  # use openshift SDN
  OOShift::ContainerPort: openshift-on-openstack/sdn_openshift_sdn.yaml
  # enable ipfailover for router setup
  OOShift::IPFailover: openshift-on-openstack/ipfailover_keepalived.yaml
  # create dedicated volume for docker storage
  OOShift::DockerVolume: openshift-on-openstack/volume_docker.yaml
  OOShift::DockerVolumeAttachment: openshift-on-openstack/volume_attachment_docker.yaml
  # use ephemeral cinder volume for openshift registry
  OOShift::RegistryVolume: openshift-on-openstack/registry_ephemeral.yaml
EOF
# retrieve the Heat template (if you haven't yet)
git clone https://github.com/redhat-openstack/openshift-on-openstack.git

After this you can deploy using the heat command:

# create a stack named 'my-openshift'
heat stack-create my-openshift -t 180 \
  -e openshift_parameters.yaml \
  -f openshift-on-openstack/openshift.yaml

or using the generic OpenStack client:

# create a stack named 'my-openshift'
openstack stack create --timeout 180 \
  -e openshift_parameters.yaml \
  -t openshift-on-openstack/openshift.yaml my-openshift

The node_count parameter specifies how many compute nodes you want to deploy. In the example above, we will deploy one master, one infra node and two compute nodes.

The templates will report stack completion back to Heat only when the whole OpenShift setup is finished.
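
While you wait, you can follow progress with the standard heat client (stack name as in the example above):

# watch overall stack status and recent events
heat stack-list
heat event-list my-openshift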

Debugging

Sometimes it’s necessary to find out why a stack was not deployed as expected. Debugging helps you find the root cause of the issue.
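
A few standard heat client calls usually narrow a failure down (resource and deployment IDs come from your own stack):

# list failed resources, descending into nested stacks
heat resource-list -n 5 my-openshift | grep -i FAILED

# inspect the output of a failing software deployment
heat deployment-show <deployment_id>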

OpenStack Integration

OpenShift on OpenStack takes advantage of the cloud provider to offer features such as dynamic storage to OpenShift users. Autoscaling also requires communication with the OpenStack service. You must provide a set of OpenStack credentials so that OpenShift and the Heat scaling mechanism can work correctly.

These are the same values used to create the Heat stack.

Sample OSP Credentials - osp_credentials.yaml
---
parameters:
  os_auth_url: http://10.0.x.x:5000/v2.0
  os_username: <username>
  os_password: <password>
  os_region_name: regionOne
  os_tenant_name: <tenant name>
  os_domain_name: <domain name>

When invoking the stack creation, include this by adding -e osp_credentials.yaml to the command.
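
For example, extending the earlier deployment command:

heat stack-create my-openshift -t 180 \
  -e openshift_parameters.yaml \
  -e osp_credentials.yaml \
  -f openshift-on-openstack/openshift.yaml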

OpenStack with SSL/TLS

If your OpenStack service is encrypted with SSL/TLS, you will need to provide the CA certificate so that the communication channel can be validated.

The CA certificate is provided as a literal string copy of the contents of the CA certificate file, and can be included in an additional environment file:

CA Certificate Parameter File ca_certificates.yaml
---
parameters:
  ca_cert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----

When invoking the stack creation, include this by adding -e ca_certificates.yaml.

You can include multiple CA certificate strings and all will be imported into the CA list on all instances.
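
A sketch of what a multi-certificate file might look like, assuming (as the text implies) that the certificates are simply concatenated inside the same literal block:

parameters:
  ca_cert: |
    -----BEGIN CERTIFICATE-----
    ... first CA ...
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    ... second CA ...
    -----END CERTIFICATE-----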

Multiple Master Nodes

You can deploy OpenShift with multiple master hosts using the 'native' HA method (see https://docs.openshift.org/latest/install_config/install/advanced_install.html#multiple-masters for details) by increasing the number of master nodes. This can be done by setting the master_count Heat parameter:

heat stack-create my-openshift \
   -e openshift_parameters.yaml \
   -P master_count=3 \
   -f openshift-on-openstack/openshift.yaml

Three master nodes will be deployed. The console and API URLs point to the load balancer server, which distributes requests across all three nodes. You can get the URLs from Heat by running heat output-show my-openshift console_url and heat output-show my-openshift api_url.

Multiple Infra Nodes

You can deploy OpenShift with multiple infra hosts. An OpenShift router is then deployed on each infra node (only if -P deploy_router=true is used) and router requests are load balanced by either a dedicated or Neutron load balancer. This can be done by setting the infra_count Heat parameter:

heat stack-create my-openshift \
   -e openshift_parameters.yaml \
   -P infra_count=2 \
   -P deploy_router=true \
   -f openshift-on-openstack/openshift.yaml

Two infra nodes will be deployed. The load balancer distributes requests on ports 80 and 443 across both nodes.

Select Loadbalancer Type

When deploying multiple master nodes, both access to the master nodes and the OpenShift router pods (which run on infra nodes) have to be load balanced. openshift-on-openstack provides multiple options for setting up load balancing:

  • Neutron LBaaS - this load balancer is used by default. The Neutron load balancer service is used for load balancing console/API requests to master nodes. At the moment, OpenShift router requests are not load balanced and an external load balancer has to be used for them. This is the default option, but it can be set explicitly by including -e openshift-on-openstack/env_loadbalancer_neutron.yaml when creating the stack. By default, this mode uses IP failover.

  • External loadbalancer - the user is expected to set up their own load balancer for both the master nodes and the OpenShift routers. This is the suggested type for production. To select this type, include -e openshift-on-openstack/env_loadbalancer_external.yaml when creating the stack and also set the lb_hostname parameter to the load balancer’s fully qualified domain name (see the sketch after this list). Once stack creation is finished, you can configure your external load balancer with the list of created master nodes.

  • Dedicated loadbalancer node - a dedicated node is created during stack creation and an HAProxy load balancer is configured on it. Both console/API and OpenShift router requests are load balanced by this dedicated node. This type is useful for demo/testing purposes only, because HA is not assured for the single load balancer. To select this type, include -e openshift-on-openstack/env_loadbalancer_dedicated.yaml when creating the stack.

  • None - if only a single master node is deployed, it’s possible to skip load balancer creation; all master node requests and OpenShift router requests then point to the single master node. To select this type, include -e openshift-on-openstack/env_loadbalancer_none.yaml when creating the stack. By default, this mode uses IP failover.
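
For example, selecting the external load balancer type (the lb_hostname value below is a placeholder):

heat stack-create my-openshift \
  -e openshift_parameters.yaml \
  -e openshift-on-openstack/env_loadbalancer_external.yaml \
  -P lb_hostname=lb.example.com \
  -f openshift-on-openstack/openshift.yaml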

Select SDN Type

By default, OpenShift is deployed with OpenShift-SDN. When used with OpenStack Neutron with GRE or VXLAN tunnels, packets are encapsulated twice, which can have an impact on performance. These Heat templates allow using Flannel instead of openshift-sdn, with the host-gw backend, to avoid the double encapsulation. To do so, you need to include the env_flannel.yaml environment file when you create the stack:

heat stack-create my_openshift \
   -e openshift_parameters.yaml \
   -f openshift-on-openstack/openshift.yaml \
   -e openshift-on-openstack/env_flannel.yaml

To use this feature, the Neutron port_security extension driver needs to be enabled. When using the ML2 driver, edit the file /etc/neutron/plugins/ml2/ml2_conf.ini and make sure it contains the line:

extension_drivers = port_security
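
For reference, the option belongs in the [ml2] section of that file:

[ml2]
extension_drivers = port_security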

Note that this feature is still in experimental mode.

LDAP authentication

You can use an external LDAP server to authenticate OpenShift users. Update the parameters in the env_ldap.yaml file and include this environment file when you create the stack.

Example of env_ldap.yaml using an Active Directory server:

LDAP parameter file env_ldap.yaml
parameter_defaults:
   ldap_hostname: <ldap hostname>
   ldap_ip: <ip of ldap server>
   ldap_url: ldap://<ldap hostname>:389/CN=Users,DC=example,DC=openshift,DC=com?sAMAccountName
   ldap_bind_dn: CN=Administrator,CN=Users,DC=example,DC=openshift,DC=com
   ldap_bind_password: <admin password>

heat stack-create my-openshift \
  -e openshift_parameters.yaml \
  -e openshift-on-openstack/env_ldap.yaml \
  -f openshift-on-openstack/openshift.yaml

If your LDAP service uses SSL, you will also need to add a CA certificate for the LDAP communications.

Using Custom Yum Repositories

You can set additional Yum repositories on deployed nodes by passing the extra_repository_urls parameter, which contains a comma-delimited list of Yum repository URLs:

heat stack-create my-openshift \
  -e openshift_parameters.yaml \
  -P extra_repository_urls=http://server/my/own/repo1.repo,http://server/my/own/repo2.repo \
  -f openshift-on-openstack/openshift.yaml

Using Custom Docker Repositories

You can set additional Docker repositories on deployed nodes by passing the extra_docker_repository_urls parameter, which contains a comma-delimited list of Docker repository URLs. If a repository is insecure, you can add the #insecure suffix to it:

heat stack-create my-openshift \
  -e openshift_parameters.yaml \
  -P extra_docker_repository_urls='user.docker.example.com,custom.user.example.com#insecure' \
  -f openshift-on-openstack/openshift.yaml

Using Persistent Cinder Volume for Docker Registry

When deploying the OpenShift registry (-P deploy_registry=true) you can use either an ephemeral or a persistent Cinder volume. An ephemeral volume is used by default: the volume is automatically created when creating the stack and is also deleted when deleting the stack. Alternatively, you can use an existing Cinder volume by including the env_registry_persistent.yaml environment file and setting registry_volume_id when you create the stack:

heat stack-create my-openshift \
  -e openshift_parameters.yaml \
  -f openshift-on-openstack/openshift.yaml \
  -e openshift-on-openstack/env_registry_persistent.yaml \
  -P registry_volume_id=<cinder_volume_id>

The persistent volume is not formatted when creating the stack. If you have a new, unformatted volume, you can enforce formatting by passing -P prepare_registry=true.

Accessing OpenShift

From the user’s point of view there are two entry points into the deployed OpenShift environment:

  • OpenShift console and API URLs: these URLs usually point to the load balancer host and can be obtained by:

heat output-show my-openshift console_url
heat output-show my-openshift api_url

  • Router IP: the IP address on which the OpenShift router service listens. This IP will be used for setting wildcard DNS for the .apps.<domain> subdomain. The IP can be obtained by:

heat output-show my-openshift router_ip

Setting DNS

To make sure that console and API URL resolution works properly, you have to create a DNS record for the hostname used in the console_url and api_url URLs. The floating IP address can be obtained by:

heat output-show my-openshift loadbalancer_ip

For example, if console_url is https://default32-lb.example.com:8443/console/ and loadbalancer_ip is 172.24.4.166, there should be a DNS record for domain example.com:

default32-lb  IN A  172.24.4.166

If the OpenShift router was deployed (-P deploy_router=true), you may also want to make sure that wildcard DNS is set for the application subdomain. For example, if the domain used is example.com and router_ip is 172.24.4.168, there should be a DNS record for domain example.com:

*.cloudapps.example.com. 300 IN  A 172.24.4.168

Note
The above DNS records should be set on the DNS server authoritative for the domain used in the OpenShift cluster (example.com in the example above).

Dynamic DNS Updates

If your DNS servers support dynamic updates (as defined in RFC 2136), you can pass the update key in the dns_update_key parameter and each node will register its internal IP address to all the DNS servers in the dns_nameserver list.

In addition, if you use the dedicated load balancer, the API and wildcard entries will be created as well. Otherwise, you will need to set them manually.
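
A sketch of passing the key at stack creation; the key value is a placeholder and its format must match your DNS server's TSIG configuration:

heat stack-create my-openshift \
  -e openshift_parameters.yaml \
  -P dns_update_key='<update key>' \
  -f openshift-on-openstack/openshift.yaml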

Retrieving the OpenShift CA certificate

You can retrieve the CA certificate and key that were generated during the OpenShift installation by running:

heat output-show --format=raw my-openshift ca_cert > ca.crt
heat output-show --format=raw my-openshift ca_key > ca.key

Container and volume quotas

OpenShift has preliminary support for local emptyDir volume quotas. You can set the volume_quota parameter to a resource quantity representing the desired quota per FSGroup.

You can set a quota on the maximum size of the containers using the container_quota parameter, in GB.

Example:

   volume_quota: 10
   container_quota: 20

Disabling Cinder volumes for Docker storage

By default, the Heat templates create a Cinder volume per OpenShift node to host containers. This can be disabled by including both volume_noop.yaml and volume_attachment_noop.yaml in your environment file:

resource_registry:
  ...
  OOShift::DockerVolume: volume_noop.yaml
  OOShift::DockerVolumeAttachment: volume_attachment_noop.yaml

IP failover

These templates allow using IP failover for the OpenShift router. In this mode, a virtual IP address is assigned to the OpenShift router. Multiple instances of the router may be active, but only one instance at a time will hold the virtual IP. This ensures minimal downtime in the case of failure of the currently active router.

By default, IP failover is used when the load balancing mode is Neutron LBaaS or None (see section Select Loadbalancer Type).

The virtual IP of the router can be retrieved with:

heat output-show --format=raw my-openshift router_ip

Scaling Up or Down

You can manually scale OpenShift nodes up or down by updating the node_count Heat stack parameter to the desired new count:

heat stack-update -P node_count=5 <other parameters>

If the stack has 2 nodes, 3 new nodes are added; if the stack has 7 nodes, 2 are removed. Any running pods are evacuated from the nodes being removed.
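
A fuller sketch of the update call, reusing the environment and template from the deployment example:

heat stack-update my-openshift \
  -e openshift_parameters.yaml \
  -f openshift-on-openstack/openshift.yaml \
  -P node_count=5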

Autoscaling

Scaling of OpenShift nodes can be automated by using Ceilometer metrics. By default, cpu_util metering is used. You can enable autoscaling with the autoscaling Heat parameter and by tweaking the properties of cpu_alarm_high and cpu_alarm_low in openshift.yaml.
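
A sketch of enabling it at stack creation, assuming the autoscaling parameter is boolean:

heat stack-create my-openshift \
  -e openshift_parameters.yaml \
  -P autoscaling=true \
  -f openshift-on-openstack/openshift.yaml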

Removing or Replacing Specific Nodes

Sometimes it’s necessary to remove or replace specific nodes in the stack, for example because of a hardware issue. Because OpenShift "compute" nodes are members of a Heat AutoScalingGroup, adding or removing nodes is by default handled by a scaling policy, and when removing a node Heat selects the oldest one by default. A specific node can nevertheless be removed with the following steps:

# delete the node
$ nova delete instance_name

# let heat detect the missing node
$ heat action-check stack_name

# update the stack with the desired new number of nodes (same as before
# for replacement, decreased by 1 for removal)
$ heat stack-update <parameters> -P node_count=<desired_count>

Known Bugs

Here is the list of known bugs that are not yet fixed and that you may hit.

Customize OpenShift installation

These Heat templates make use of openshift-ansible to deploy OpenShift. You can provide additional parameters to openshift-ansible by specifying a JSON string as the extra_openshift_ansible_params parameter. For example:

$ heat stack-create <parameters> -P extra_openshift_ansible_params='{"osm_use_cockpit":true}'

This parameter must be used with caution as it may conflict with other parameters passed to openshift-ansible by the Heat templates.

Current Status

  1. The CA certificate used with OpenShift is currently not configurable.

  2. The apps cloud domain is hardcoded for now. We need to make this configurable.

Prebuilt images

A customize-disk-image script is provided to preinstall OpenShift packages.

./customize-disk-image --disk rhel7.2.qcow2 --sm-credentials user:password

The modified image must be uploaded into Glance and used as the server image for the Heat stack with the server_image parameter.
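
A sketch of uploading the customized image and pointing the stack at it; the image name is a placeholder and the glance flags follow the earlier all-in-one example:

glance image-create --name rhel72-openshift --is-public True \
  --disk-format qcow2 --container-format bare \
  --file rhel7.2.qcow2

heat stack-create my-openshift \
  -e openshift_parameters.yaml \
  -P server_image=rhel72-openshift \
  -f openshift-on-openstack/openshift.yaml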

Copyright 2016 Red Hat, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
