
AWX Operator


An Ansible AWX operator for Kubernetes built with Operator SDK and Ansible.


Purpose

This operator is meant to provide a more Kubernetes-native installation method for AWX via an AWX Custom Resource Definition (CRD).

Usage

This Kubernetes Operator is meant to be deployed in your Kubernetes cluster(s) and can manage one or more AWX instances in any namespace.

Creating a minikube cluster for testing

If you do not have an existing cluster, the awx-operator can be deployed on a Minikube cluster for testing purposes. Since OS and hardware environments differ, refer to the official Minikube documentation for platform-specific details.

$ minikube start --cpus=4 --memory=6g --addons=ingress
πŸ˜„  minikube v1.23.2 on Fedora 34
✨  Using the docker driver based on existing profile
πŸ‘  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
πŸƒ  Updating the running docker "minikube" container ...
🐳  Preparing Kubernetes v1.22.2 on Docker 20.10.8 ...
πŸ”Ž  Verifying Kubernetes components...
    β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
    β–ͺ Using image k8s.gcr.io/ingress-nginx/controller:v1.0.0-beta.3
    β–ͺ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
    β–ͺ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
πŸ”Ž  Verifying ingress addon...
🌟  Enabled addons: storage-provisioner, default-storageclass, ingress
πŸ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Once Minikube is deployed, check that the node(s) can communicate with the kube-apiserver as expected.

$ minikube kubectl -- get nodes
NAME       STATUS   ROLES                  AGE    VERSION
minikube   Ready    control-plane,master   113s   v1.22.2

$ minikube kubectl -- get pods -A
NAMESPACE       NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx   ingress-nginx-admission-create--1-kk67h     0/1     Completed   0          2m1s
ingress-nginx   ingress-nginx-admission-patch--1-7mp2r      0/1     Completed   1          2m1s
ingress-nginx   ingress-nginx-controller-69bdbc4d57-bmwg8   1/1     Running     0          2m
kube-system     coredns-78fcd69978-q7nmx                    1/1     Running     0          2m
kube-system     etcd-minikube                               1/1     Running     0          2m12s
kube-system     kube-apiserver-minikube                     1/1     Running     0          2m16s
kube-system     kube-controller-manager-minikube            1/1     Running     0          2m12s
kube-system     kube-proxy-5mmnw                            1/1     Running     0          2m1s
kube-system     kube-scheduler-minikube                     1/1     Running     0          2m15s
kube-system     storage-provisioner                         1/1     Running     0          2m11s

kubectl does not need to be installed separately, since it comes bundled with minikube. As demonstrated above, simply prefix kubectl commands with minikube kubectl --; for example, kubectl get nodes becomes minikube kubectl -- get nodes.

Let's create an alias for easier usage:

$ alias kubectl="minikube kubectl --"

Basic Install

Once you have a running Kubernetes cluster, you can deploy AWX Operator into your cluster using Kustomize. Kustomize functionality is built into kubectl since version 1.14; otherwise, follow the instructions at https://kubectl.docs.kubernetes.io/installation/kustomize/ to install the latest version of Kustomize.

There is a make target you can run:

make deploy

If you have a custom operator image you have built, you can specify it with:

IMG=quay.io/$YOURNAMESPACE/awx-operator:$YOURTAG make deploy

Otherwise, you can manually create a file called kustomization.yaml with the following content:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Find the latest tag here: https://github.com/ansible/awx-operator/releases
  - github.com/ansible/awx-operator/config/default?ref=<tag>

# Set the image tags to match the git version from above
images:
  - name: quay.io/ansible/awx-operator
    newTag: <tag>

# Specify a custom namespace in which to install AWX
namespace: awx

TIP: If you need to change any of the default settings for the operator (such as resources.limits), you can add patches at the bottom of your kustomization.yaml file.
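For example, a minimal sketch of such a patch, raising the manager container's resource limits (the Deployment and container names match the manifests created below; the limit values are illustrative, and older kustomize versions may require patchesStrategicMerge instead of patches):

patches:
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: awx-operator-controller-manager
        namespace: awx
      spec:
        template:
          spec:
            containers:
              - name: awx-manager
                resources:
                  limits:
                    cpu: 1000m
                    memory: 512Mi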

Install the manifests by running this:

$ kubectl apply -k .
namespace/awx created
customresourcedefinition.apiextensions.k8s.io/awxbackups.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxrestores.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxs.awx.ansible.com created
serviceaccount/awx-operator-controller-manager created
role.rbac.authorization.k8s.io/awx-operator-awx-manager-role created
role.rbac.authorization.k8s.io/awx-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/awx-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/awx-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/awx-operator-awx-manager-rolebinding created
rolebinding.rbac.authorization.k8s.io/awx-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/awx-operator-proxy-rolebinding created
configmap/awx-operator-awx-manager-config created
service/awx-operator-controller-manager-metrics-service created
deployment.apps/awx-operator-controller-manager created

Wait a bit and you should have the awx-operator running:

$ kubectl get pods -n awx
NAME                                               READY   STATUS    RESTARTS   AGE
awx-operator-controller-manager-66ccd8f997-rhd4z   2/2     Running   0          11s

So we don't have to keep repeating -n awx, let's set the current namespace for kubectl:

$ kubectl config set-context --current --namespace=awx

Next, create a file named awx-demo.yaml in the same folder with the suggested content below. The metadata.name you provide will be the name of the resulting AWX deployment.

Note: If you deploy more than one AWX instance to the same namespace, be sure to use unique names.

---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  service_type: nodeport

It may make sense to create and specify your own secret key for your deployment so that if the k8s secret gets deleted, it can be re-created if needed. If it is not provided, one will be auto-generated, but cannot be recovered if lost. Read more here.

If you are on OpenShift, you can take advantage of Routes by specifying the following in your spec. This will automatically create a Route for you with a custom hostname, which can be found in the Routes section of the OpenShift Console.

---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  service_type: clusterip
  ingress_type: Route

Make sure to add this new file to the list of "resources" in your kustomization.yaml file:

...
resources:
  - github.com/ansible/awx-operator/config/default?ref=<tag>
  # Add this extra line:
  - awx-demo.yaml
...

Finally, apply the changes to create the AWX instance in your cluster:

kubectl apply -k .

After a few minutes, the new AWX instance will be deployed. You can look at the operator pod logs in order to know where the installation process is at:

$ kubectl logs -f deployments/awx-operator-controller-manager -c awx-manager

After a few seconds, you should see the operator begin to create new resources:

$ kubectl get pods -l "app.kubernetes.io/managed-by=awx-operator"
NAME                        READY   STATUS    RESTARTS   AGE
awx-demo-77d96f88d5-pnhr8   4/4     Running   0          3m24s
awx-demo-postgres-0         1/1     Running   0          3m34s

$ kubectl get svc -l "app.kubernetes.io/managed-by=awx-operator"
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
awx-demo-postgres   ClusterIP   None           <none>        5432/TCP       4m4s
awx-demo-service    NodePort    10.109.40.38   <none>        80:31006/TCP   3m56s

Once deployed, the AWX instance will be accessible by running:

$ minikube service -n awx awx-demo-service --url

By default, the admin user is admin and the password is available in the <resourcename>-admin-password secret. To retrieve the admin password, run:

$ kubectl get secret awx-demo-admin-password -o jsonpath="{.data.password}" | base64 --decode ; echo
yDL2Cx5Za94g9MvBP6B73nzVLlmfgPjR

You just completed the most basic install of an AWX instance via this operator. Congratulations!!!

For an example using the Nginx Ingress Controller in Minikube, don't miss our demo video.

Helm Install on existing cluster

For those that wish to use Helm to install the awx-operator to an existing K8s cluster:

The helm chart is generated from the helm-chart Makefile section using the starter files in .helm/starter. Consult the documentation on how to customize the AWX resource with your own values.

$ helm repo add awx-operator https://ansible.github.io/awx-operator/
"awx-operator" has been added to your repositories

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "awx-operator" chart repository
Update Complete. ⎈Happy Helming!⎈

$ helm search repo awx-operator
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
awx-operator/awx-operator       0.17.1          0.17.1          A Helm chart for the AWX Operator

$ helm install -n awx --create-namespace my-awx-operator awx-operator/awx-operator
NAME: my-awx-operator
LAST DEPLOYED: Thu Feb 17 22:09:05 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Helm Chart 0.17.1

Admin user account configuration

There are three variables that are customizable for the admin user account creation.

Name | Description | Default
admin_user | Name of the admin user | admin
admin_email | Email of the admin user | test@example.com
admin_password_secret | Secret that contains the admin user password | Empty string

⚠️ admin_password_secret must be a Kubernetes secret, not your cleartext password.

If admin_password_secret is not provided, the operator will look for a secret named <resourcename>-admin-password for the admin password. If it is not present, the operator will generate a password and create a Secret from it named <resourcename>-admin-password.

To retrieve the admin password, run kubectl get secret <resourcename>-admin-password -o jsonpath="{.data.password}" | base64 --decode ; echo

The secret that is expected to be passed should be formatted as follows:

---
apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-admin-password
  namespace: <target namespace>
stringData:
  password: mysuperlongpassword
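With that secret in place, a minimal sketch of a spec tying the three variables together could look like this (the user name, email, and secret name are illustrative):

---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  admin_user: admin
  admin_email: admin@example.com
  admin_password_secret: awx-demo-admin-password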

Secret Key Configuration

This key is used to encrypt sensitive data in the database.

Name | Description | Default
secret_key_secret | Secret that contains the symmetric key for encryption | Generated

⚠️ secret_key_secret must be a Kubernetes secret, not your cleartext secret value.

If secret_key_secret is not provided, the operator will look for a secret named <resourcename>-secret-key for the secret key. If it is not present, the operator will generate one and create a Secret from it named <resourcename>-secret-key. It is important not to delete this secret, as it will be needed for upgrades and if the pods get scaled down at any point. If you are using a GitOps flow, you will want to pass a secret key secret.

The secret should be formatted as follows:

---
apiVersion: v1
kind: Secret
metadata:
  name: custom-awx-secret-key
  namespace: <target namespace>
stringData:
  secret_key: supersecuresecretkey

Then specify the secret name on the AWX spec:

---
spec:
  ...
  secret_key_secret: custom-awx-secret-key

Network and TLS Configuration

Service Type

If the service_type is not specified, the ClusterIP service will be used for your AWX Tower service.

The service_type supported options are: ClusterIP, LoadBalancer and NodePort.

The following variables are customizable for any service_type:

Name | Description | Default
service_labels | Add custom labels | Empty string
service_annotations | Add service annotations | Empty string

---
spec:
  ...
  service_type: ClusterIP
  service_annotations: |
    environment: testing
  service_labels: |
    environment: testing

  • LoadBalancer

The following variables are customizable only when service_type=LoadBalancer:

Name | Description | Default
loadbalancer_protocol | Protocol to use for Loadbalancer ingress | http
loadbalancer_port | Port used for Loadbalancer ingress | 80
loadbalancer_ip | Assign Loadbalancer IP | ''

---
spec:
  ...
  service_type: LoadBalancer
  loadbalancer_ip: '192.168.10.25'
  loadbalancer_protocol: https
  loadbalancer_port: 443
  service_annotations: |
    environment: testing
  service_labels: |
    environment: testing

When setting up a Load Balancer for HTTPS, you will need to set loadbalancer_port to move the port away from 80.

The HTTPS Load Balancer also uses SSL termination at the Load Balancer level and will offload traffic to AWX over HTTP.

  • NodePort

The following variables are customizable only when service_type=NodePort:

Name | Description | Default
nodeport_port | Port used for NodePort | 30080

---
spec:
  ...
  service_type: NodePort
  nodeport_port: 30080

Ingress Type

By default, the AWX operator is not opinionated and won't force a specific ingress type on you. So, when the ingress_type is not specified, it will default to none and nothing ingress-wise will be created.

The ingress_type supported options are: none, ingress and route. To toggle between these options, you can add the following to your AWX CRD:

  • None
---
spec:
  ...
  ingress_type: none

  • Generic Ingress Controller

The following variables are customizable when ingress_type=ingress. The ingress type creates an Ingress resource as documented, which can be shared with many other Ingress Controllers as listed.

Name | Description | Default
ingress_annotations | Ingress annotations | Empty string
ingress_tls_secret | Secret that contains the TLS information | Empty string
ingress_class_name | Define the ingress class name | Cluster default
hostname | Define the FQDN | {{ meta.name }}.example.com
ingress_path | Define the ingress path to the service | /
ingress_path_type | Define the type of the path (for LBs) | Prefix
ingress_api_version | Define the Ingress resource apiVersion | 'networking.k8s.io/v1'

---
spec:
  ...
  ingress_type: ingress
  hostname: awx-demo.example.com
  ingress_annotations: |
    environment: testing
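For example, to serve AWX over HTTPS through the ingress, you could create a TLS secret and reference it via ingress_tls_secret. This is a minimal sketch; the secret name and certificate paths are illustrative:

$ kubectl create secret tls awx-demo-tls --cert=/path/to/tls.crt --key=/path/to/tls.key

---
spec:
  ...
  ingress_type: ingress
  hostname: awx-demo.example.com
  ingress_tls_secret: awx-demo-tls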
Specialized Ingress Controller configuration

Some Ingress Controllers need special configuration to fully support AWX. If you are using one of the following, set the ingress_controller variable accordingly:

Ingress Controller name | value
Contour | contour

---
spec:
  ...
  ingress_type: ingress
  hostname: awx-demo.example.com
  ingress_controller: contour

  • Route

The following variables are customizable when ingress_type=route:

Name | Description | Default
route_host | Common name the route answers for | <instance-name>-<namespace>-<routerCanonicalHostname>
route_tls_termination_mechanism | TLS Termination mechanism (Edge, Passthrough) | Edge
route_tls_secret | Secret that contains the TLS information | Empty string
route_api_version | Define the Route resource apiVersion | 'route.openshift.io/v1'

---
spec:
  ...
  ingress_type: route
  route_host: awx-demo.example.com
  route_tls_termination_mechanism: Passthrough
  route_tls_secret: custom-route-tls-secret-name

Database Configuration

Postgres Version

The default Postgres version for the version of AWX bundled with the latest version of the awx-operator is Postgres 13. You can find this default for a given version by looking at the default value of _postgres_image_version.

We only have coverage for the default version of Postgres. Newer versions of Postgres (14+) will likely work, but should only be configured as an external database. If your database is managed by the awx-operator (default if you don't specify a postgres_configuration_secret), then you should not override the default version as this may cause issues when awx-operator tries to upgrade your postgresql pod.

External PostgreSQL Service

To configure AWX to use an external database, the Custom Resource needs to know about the connection details. To do this, create a k8s secret with those connection details and specify the name of the secret as postgres_configuration_secret at the CR spec level.

The secret should be formatted as follows:

---
apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-postgres-configuration
  namespace: <target namespace>
stringData:
  host: <external ip or url resolvable by the cluster>
  port: <external port, this usually defaults to 5432>
  database: <desired database name>
  username: <username to connect as>
  password: <password to connect with>
  sslmode: prefer
  type: unmanaged
type: Opaque

Please ensure that the value of the password variable does not contain single or double quotes (', ") or backslashes (\), to avoid issues during deployment, backup, or restore.

It is possible to set a specific username, password, port, or database, but still have the database managed by the operator. In this case, add the type: managed field when creating the postgres-configuration secret.
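A minimal sketch of such a managed-database secret could look like the following, assuming only the values you want to override need to be present (names and values are illustrative):

---
apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-postgres-configuration
  namespace: <target namespace>
stringData:
  database: customawxdb
  username: customawxuser
  password: custompassword
  port: "5432"
  type: managed
type: Opaque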

Note: The variable sslmode is valid for external databases only. The allowed values are: prefer, disable, allow, require, verify-ca, verify-full.

Once the secret is created, you can specify it on your spec:

---
spec:
  ...
  postgres_configuration_secret: <name-of-your-secret>

Migrating data from an old AWX instance

For instructions on how to migrate from an older version of AWX, see migration.md.

Managed PostgreSQL Service

If you don't have access to an external PostgreSQL service, the AWX operator can deploy one for you alongside the AWX instance itself.

The following variables are customizable for the managed PostgreSQL service:

Name | Description | Default
postgres_image | Path of the image to pull | postgres:13
postgres_init_container_resource_requirements | Database init container resource requirements | requests: {cpu: 10m, memory: 64Mi}
postgres_resource_requirements | PostgreSQL container resource requirements | requests: {cpu: 10m, memory: 64Mi}
postgres_storage_requirements | PostgreSQL container storage requirements | requests: {storage: 8Gi}
postgres_storage_class | PostgreSQL PV storage class | Empty string
postgres_data_path | PostgreSQL data path | /var/lib/postgresql/data/pgdata
postgres_priority_class | Priority class used for PostgreSQL pod | Empty string

Example of customization could be:

---
spec:
  ...
  postgres_resource_requirements:
    requests:
      cpu: 500m
      memory: 2Gi
    limits:
      cpu: '1'
      memory: 4Gi
  postgres_storage_requirements:
    requests:
      storage: 8Gi
    limits:
      storage: 50Gi
  postgres_storage_class: fast-ssd
  postgres_extra_args:
    - '-c'
    - 'max_connections=1000'

Note: If postgres_storage_class is not defined, Postgres will store its data on a volume using the default storage class for your cluster.

Advanced Configuration

Deploying a specific version of AWX

There are a few variables that are customizable for AWX image management.

Name | Description | Default
image | Path of the image to pull | quay.io/ansible/awx
image_version | Image version to pull | value of DEFAULT_AWX_VERSION or latest
image_pull_policy | The pull policy to adopt | IfNotPresent
image_pull_secrets | The pull secrets to use | None
ee_images | A list of EEs to register | quay.io/ansible/awx-ee:latest
redis_image | Path of the image to pull | docker.io/redis
redis_image_version | Image version to pull | latest

Example of customization could be:

---
spec:
  ...
  image: myorg/my-custom-awx
  image_version: latest
  image_pull_policy: Always
  image_pull_secrets:
    - pull_secret_name
  ee_images:
    - name: my-custom-awx-ee
      image: myorg/my-custom-awx-ee

Note: The image and image_version are intended for local mirroring scenarios. Please note that using a version of AWX other than the one bundled with the awx-operator is not supported. For the default values, check the main.yml file.

Redis container capabilities

Depending on your Kubernetes cluster and settings, you might need to grant some capabilities to the redis container so it can start. Set the redis_capabilities option so the capabilities are added to the deployment.

---
spec:
  ...
  redis_capabilities:
    - CHOWN
    - SETUID
    - SETGID

Privileged Tasks

Depending on the type of tasks that you'll be running, you may find that you need the task pod to run as privileged. This can open you up to a variety of security concerns, so you should be aware of the implications (and verify that you have the required privileges) before doing so. To toggle this feature, add the following to your custom resource:

---
spec:
  ...
  task_privileged: true

If you are attempting to do this on an OpenShift cluster, you will need to grant the awx ServiceAccount the privileged SCC, which can be done with:

$ oc adm policy add-scc-to-user privileged -z awx

Again, this is the most relaxed SCC that is provided by OpenShift, so be sure to familiarize yourself with the security concerns that accompany this action.

Containers HostAliases Requirements

Sometimes you might need to use HostAliases in web/task containers.

Name | Description | Default
host_aliases | A list of HostAliases | None

Example of customization could be:

---
spec:
  ...
  host_aliases:
    - ip: <name-of-your-ip>
      hostnames:
        - <name-of-your-domain>

Containers Resource Requirements

The resource requirements for both the task and the web containers are configurable, at both the lower end (requests) and the upper end (limits).

Name | Description | Default
web_resource_requirements | Web container resource requirements | requests: {cpu: 100m, memory: 128Mi}
task_resource_requirements | Task container resource requirements | requests: {cpu: 100m, memory: 128Mi}
ee_resource_requirements | EE control plane container resource requirements | requests: {cpu: 100m, memory: 128Mi}

Example of customization could be:

---
spec:
  ...
  web_resource_requirements:
    requests:
      cpu: 250m
      memory: 2Gi
      ephemeral-storage: 100M
    limits:
      cpu: 1000m
      memory: 4Gi
      ephemeral-storage: 500M
  task_resource_requirements:
    requests:
      cpu: 250m
      memory: 1Gi
      ephemeral-storage: 100M
    limits:
      cpu: 2000m
      memory: 2Gi
      ephemeral-storage: 500M
  ee_resource_requirements:
    requests:
      cpu: 250m
      memory: 100Mi
      ephemeral-storage: 100M
    limits:
      cpu: 500m
      memory: 2Gi
      ephemeral-storage: 500M

Priority Classes

The AWX and Postgres pods can be assigned a custom PriorityClass to rank their importance compared to other pods in your cluster, which determines which pods get evicted first if resources are running low. First, create your PriorityClass if needed. Then set the name of your priority class to the control plane and postgres pods as shown below.

---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  ...
  control_plane_priority_class: awx-demo-high-priority
  postgres_priority_class: awx-demo-medium-priority
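If you do not already have a PriorityClass, a minimal sketch of one could look like this (the name matches the spec above; the value is illustrative and only meaningful relative to your cluster's other priority classes):

---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: awx-demo-high-priority
value: 1000000
globalDefault: false
description: "Priority class for AWX control plane pods."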

Scaling the Web and Task Pods independently

You can scale replicas up or down for each deployment by using web_replicas or task_replicas, respectively. You can also scale all pods across both deployments by using replicas. The logic behind these CRD keys is as follows:

  • If you specify the replicas field, the key passed will scale both the web and task replicas to the same number.
  • If web_replicas or task_replicas is ever passed, it will override the existing replicas field on the specific deployment with the new key value.

These new replicas can be constrained in a similar manner to previous single deployments by appending the particular deployment name in front of the constraint used. More about those new constraints can be found below in the Assigning AWX pods to specific nodes section.
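For example, a minimal sketch of a spec that scales the two deployments independently (the replica counts are illustrative):

---
spec:
  ...
  web_replicas: 3
  task_replicas: 2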

Assigning AWX pods to specific nodes

You can constrain the AWX pods created by the operator to run on a certain subset of nodes. node_selector and postgres_selector constrain the AWX pods to run only on the nodes that match all the specified key/value pairs. tolerations and postgres_tolerations allow the AWX pods to be scheduled onto nodes with matching taints. The ability to specify topologySpreadConstraints is also provided through topology_spread_constraints. If you want to use affinity rules for your AWX pod, you can use the affinity option.

If you want to constrain the web and task pods individually, you can do so by specifying the deployment type before the specific setting. For example, specifying task_tolerations will allow the AWX task pod to be scheduled onto nodes with matching taints.

Name | Description | Default
postgres_image | Path of the image to pull | postgres
postgres_image_version | Image version to pull | 13
node_selector | AWX pods' nodeSelector | ''
web_node_selector | AWX web pods' nodeSelector | ''
task_node_selector | AWX task pods' nodeSelector | ''
topology_spread_constraints | AWX pods' topologySpreadConstraints | ''
web_topology_spread_constraints | AWX web pods' topologySpreadConstraints | ''
task_topology_spread_constraints | AWX task pods' topologySpreadConstraints | ''
affinity | AWX pods' affinity rules | ''
web_affinity | AWX web pods' affinity rules | ''
task_affinity | AWX task pods' affinity rules | ''
tolerations | AWX pods' tolerations | ''
web_tolerations | AWX web pods' tolerations | ''
task_tolerations | AWX task pods' tolerations | ''
annotations | AWX pods' annotations | ''
postgres_selector | Postgres pods' nodeSelector | ''
postgres_tolerations | Postgres pods' tolerations | ''

Example of customization could be:

---
spec:
  ...
  node_selector: |
    disktype: ssd
    kubernetes.io/arch: amd64
    kubernetes.io/os: linux
  topology_spread_constraints: |
    - maxSkew: 100
      topologyKey: "topology.kubernetes.io/zone"
      whenUnsatisfiable: "ScheduleAnyway"
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: "<resourcename>"
  tolerations: |
    - key: "dedicated"
      operator: "Equal"
      value: "AWX"
      effect: "NoSchedule"
  task_tolerations: |
    - key: "dedicated"
      operator: "Equal"
      value: "AWX_task"
      effect: "NoSchedule"
  postgres_selector: |
    disktype: ssd
    kubernetes.io/arch: amd64
    kubernetes.io/os: linux
  postgres_tolerations: |
    - key: "dedicated"
      operator: "Equal"
      value: "AWX"
      effect: "NoSchedule"
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
            - another-node-label-value
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: topology.kubernetes.io/zone

Trusting a Custom Certificate Authority

In cases where you need to trust a custom Certificate Authority, there are a few variables you can customize for the awx-operator.

Trusting a custom Certificate Authority allows AWX to access network services that use SSL certificates issued locally, such as cloning a project from an internal Git server via HTTPS. In these scenarios it is common to otherwise encounter the error unable to verify the first certificate.

Name | Description | Default
ldap_cacert_secret | LDAP Certificate Authority secret name | ''
ldap_password_secret | LDAP BIND DN Password secret name | ''
bundle_cacert_secret | Certificate Authority secret name | ''

Please note the awx-operator will look for the data field ldap-ca.crt in the specified secret when using ldap_cacert_secret, whereas the data field bundle-ca.crt is required for the bundle_cacert_secret parameter.

Example of customization could be:

---
spec:
  ...
  ldap_cacert_secret: <resourcename>-custom-certs
  ldap_password_secret: <resourcename>-ldap-password
  bundle_cacert_secret: <resourcename>-custom-certs

Create the secret with kustomization.yaml file:

...

secretGenerator:
  - name: <resourcename>-custom-certs
    files:
      - bundle-ca.crt=<path+filename>
    options:
      disableNameSuffixHash: true

...

Create the secret with CLI:

  • Certificate Authority secret

$ kubectl create secret generic <resourcename>-custom-certs \
    --from-file=ldap-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE> \
    --from-file=bundle-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE>

  • LDAP BIND DN Password secret

$ kubectl create secret generic <resourcename>-ldap-password \
    --from-literal=ldap-password=<your_ldap_dn_password>

Enabling LDAP Integration at AWX bootstrap

A sample of extra settings can be found below. All possible options can be found here: https://django-auth-ldap.readthedocs.io/en/latest/reference.html#settings

NOTE: These values are inserted into a Python file, so pay close attention to which values need quotes and which do not.

    - setting: AUTH_LDAP_SERVER_URI
      value: >-
        "ldaps://ad01.abc.com:636 ldaps://ad02.abc.com:636"

    - setting: AUTH_LDAP_BIND_DN
      value: >-
        "CN=LDAP User,OU=Service Accounts,DC=abc,DC=com"

    - setting: AUTH_LDAP_USER_SEARCH
      value: 'LDAPSearch("DC=abc,DC=com",ldap.SCOPE_SUBTREE,"(sAMAccountName=%(user)s)",)'

    - setting: AUTH_LDAP_GROUP_SEARCH
      value: 'LDAPSearch("OU=Groups,DC=abc,DC=com",ldap.SCOPE_SUBTREE,"(objectClass=group)",)'

    - setting: AUTH_LDAP_GROUP_TYPE
      value: 'GroupOfNamesType()'

    - setting: AUTH_LDAP_USER_ATTR_MAP
      value: '{"first_name": "givenName","last_name": "sn","email": "mail"}'

    - setting: AUTH_LDAP_REQUIRE_GROUP
      value: >-
        "CN=operators,OU=Groups,DC=abc,DC=com"
    - setting: AUTH_LDAP_USER_FLAGS_BY_GROUP
      value: {
        "is_superuser": [
          "CN=admin,OU=Groups,DC=abc,DC=com"
        ]
      }


    - setting: AUTH_LDAP_ORGANIZATION_MAP
      value: {
        "abc": {
          "admins": "CN=admin,OU=Groups,DC=abc,DC=com",
          "remove_users": false,
          "remove_admins": false,
          "users": true
        }
      }

    - setting: AUTH_LDAP_TEAM_MAP
      value: {
        "admin": {
          "remove": true,
          "users": "CN=admin,OU=Groups,DC=abc,DC=com",
          "organization": "abc"
        }
      }

Persisting Projects Directory

In cases where you want to persist the /var/lib/projects directory, there are a few variables that are customizable for the awx-operator.

Name | Description | Default
projects_persistence | Whether or not the /var/lib/projects directory will be persistent | false
projects_storage_class | Define the PersistentVolume storage class | ''
projects_storage_size | Define the PersistentVolume size | 8Gi
projects_storage_access_mode | Define the PersistentVolume access mode | ReadWriteMany
projects_existing_claim | Define an existing PersistentVolumeClaim to use (cannot be combined with projects_storage_*) | ''

Example of customization when the awx-operator automatically handles the persistent volume could be:

---
spec:
  ...
  projects_persistence: true
  projects_storage_class: rook-ceph
  projects_storage_size: 20Gi
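Alternatively, if you already manage a PersistentVolumeClaim yourself, a minimal sketch using projects_existing_claim could look like this (the claim name is illustrative; remember it cannot be combined with the projects_storage_* parameters):

---
spec:
  ...
  projects_persistence: true
  projects_existing_claim: awx-projects-claim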

Custom Volume and Volume Mount Options

Custom volumes and volume mounts can be used to overwrite defaults or to mount configuration files.

Name | Description | Default
extra_volumes | Specify extra volumes to add to the application pod | ''
web_extra_volume_mounts | Specify volume mounts to be added to Web container | ''
task_extra_volume_mounts | Specify volume mounts to be added to Task container | ''
rsyslog_extra_volume_mounts | Specify volume mounts to be added to Rsyslog container | ''
ee_extra_volume_mounts | Specify volume mounts to be added to Execution container | ''
init_container_extra_volume_mounts | Specify volume mounts to be added to Init container | ''
init_container_extra_commands | Specify additional commands for Init container | ''

⚠️ ee_extra_volume_mounts and extra_volumes will only take effect for the globally available Execution Environments. For a custom EE, please customize the Pod spec.

Example configuration for ConfigMap

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: <resourcename>-extra-config
  namespace: <target namespace>
data:
  ansible.cfg: |
     [defaults]
     remote_tmp = /tmp
     [ssh_connection]
     ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
  custom.py:  |
      INSIGHTS_URL_BASE = "example.org"
      AWX_CLEANUP_PATHS = True

Example spec file for volumes and volume mounts

---
    spec:
    ...
      extra_volumes: |
        - name: ansible-cfg
          configMap:
            defaultMode: 420
            items:
              - key: ansible.cfg
                path: ansible.cfg
            name: <resourcename>-extra-config
        - name: custom-py
          configMap:
            defaultMode: 420
            items:
              - key: custom.py
                path: custom.py
            name: <resourcename>-extra-config
        - name: shared-volume
          persistentVolumeClaim:
            claimName: my-external-volume-claim

      init_container_extra_volume_mounts: |
        - name: shared-volume
          mountPath: /shared

      init_container_extra_commands: |
        # set proper permissions (rwx) for the awx user
        chmod 775 /shared
        chgrp 1000 /shared

      ee_extra_volume_mounts: |
        - name: ansible-cfg
          mountPath: /etc/ansible/ansible.cfg
          subPath: ansible.cfg

      task_extra_volume_mounts: |
        - name: custom-py
          mountPath: /etc/tower/conf.d/custom.py
          subPath: custom.py
        - name: shared-volume
          mountPath: /shared

⚠️ Volume and VolumeMount names cannot contain underscores (_).

Custom UWSGI Configuration

We allow the customization of two UWSGI parameters:

  • processes with uwsgi_processes (default 5)
  • listen with uwsgi_listen_queue_size (default 128)

Note: Increasing the listen queue beyond 128 requires that the sysctl setting net.core.somaxconn be set to an equal or higher value. The operator will set the appropriate securityContext sysctl value for you, but this sysctl must also be added to an allowlist at the kubelet level. See the Kubernetes docs about allowing this sysctl setting.

These vars relate to the vertical and horizontal scalability of the web service.

Increasing the number of processes allows more requests to be actively handled per web pod, but will consume more CPU and memory; the resource requests should be increased in tandem. Increasing the listen queue allows uwsgi to queue up requests not yet being handled by the active worker processes, which may allow the web pods to handle more "bursty" request patterns, where many requests (more than 128) come in over a short period of time but can all be handled before any timeouts apply. Also see the related nginx configuration.
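For example, a minimal sketch of a spec tuning both parameters (the values are illustrative; note the net.core.somaxconn allowlisting requirement above when raising the listen queue):

  spec:
    uwsgi_processes: 10
    uwsgi_listen_queue_size: 256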

Custom Nginx Configuration

Using the extra_volumes feature, it is possible to extend the nginx.conf.

  1. Create a ConfigMap with the extra settings you want to include in the nginx.conf
  2. Create an extra_volumes entry in the AWX spec for this ConfigMap
  3. Create a web_extra_volume_mounts entry in the AWX spec to mount this volume

The AWX nginx config automatically includes /etc/nginx/conf.d/*.conf if present.
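Putting the three steps together, a minimal sketch could look like the following, assuming a snippet that raises nginx's client_max_body_size (the ConfigMap name, file name, and directive are illustrative):

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: <resourcename>-nginx-extra-config
  namespace: <target namespace>
data:
  custom.conf: |
    client_max_body_size 100m;

---
spec:
  ...
  extra_volumes: |
    - name: nginx-extra-config
      configMap:
        name: <resourcename>-nginx-extra-config
  web_extra_volume_mounts: |
    - name: nginx-extra-config
      mountPath: /etc/nginx/conf.d/custom.conf
      subPath: custom.conf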

Additionally, there are some global configuration values in the base nginx config that are available for setting with individual variables.

These vars relate to the vertical and horizontal scalability of the web service.

Increasing the number of processes allows more requests to be actively handled per web pod, but will consume more CPU and memory; the resource requests should be increased in tandem. Increasing the listen queue allows nginx to queue up requests not yet being handled by the active worker processes, which may allow the web pods to handle more "bursty" request patterns, where many requests (more than 128) come in over a short period of time but can all be handled before any timeouts apply. Also see the related uwsgi configuration.

Custom Favicon

You can use custom volume mounts to mount in your own favicon to be displayed in your AWX browser tab.

First, create the ConfigMap from a local favicon.ico file.

$ oc create configmap favicon-configmap --from-file favicon.ico

Then specify extra_volumes and web_extra_volume_mounts in your AWX CR spec:

spec:
  extra_volumes: |
    - name: favicon
      configMap:
        defaultMode: 420
        items:
          - key: favicon.ico
            path: favicon.ico
        name: favicon-configmap
  web_extra_volume_mounts: |
    - name: favicon
      mountPath: /var/lib/awx/public/static/media/favicon.ico
      subPath: favicon.ico

Default execution environments from private registries

In order to register default execution environments from private registries, the Custom Resource needs to know about the pull credentials. Those credentials should be stored as a secret and either specified as ee_pull_credentials_secret at the CR spec level, or simply be present in the namespace under the name <resourcename>-ee-pull-credentials. Instance initialization will register a Container registry type credential on the deployed instance and assign it to the registered default execution environments.

The secret should be formatted as follows:

---
apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-ee-pull-credentials
  namespace: <target namespace>
stringData:
  url: <registry url. i.e. quay.io>
  username: <username to connect as>
  password: <password to connect with>
  ssl_verify: <Optional. Whether to verify the SSL connection. Accepted values: "True" (default), "False">
type: Opaque

Control plane ee from private registry

The images listed in "ee_images" will be added as globally available Execution Environments. The "control_plane_ee_image" will be used to run project updates. In order to use a private image for any of these you'll need to use image_pull_secrets to provide a list of k8s pull secrets to access it. Currently the same secret is used for any of these images supplied at install time.

You can create the image_pull_secret with:

kubectl create secret docker-registry <resourcename>-cp-pull-credentials --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

If you need more control (for example, to set a namespace or a label on the new secret), you can customize the Secret before storing it.

Example spec file extra-config

---
apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-cp-pull-credentials
  namespace: <target namespace>
data:
  .dockerconfigjson: <base64 docker config>
type: kubernetes.io/dockerconfigjson

Exporting Environment Variables to Containers

If you need to export custom environment variables to your containers, the following variables are available.

Name | Description | Default
task_extra_env | Environment variables to be added to Task container | ''
web_extra_env | Environment variables to be added to Web container | ''
rsyslog_extra_env | Environment variables to be added to Rsyslog container | ''
ee_extra_env | Environment variables to be added to EE container | ''

⚠️ ee_extra_env will only take effect for the globally available Execution Environments. For a custom EE, please customize the Pod spec.

Example configuration of environment variables

  spec:
    task_extra_env: |
      - name: MYCUSTOMVAR
        value: foo
    web_extra_env: |
      - name: MYCUSTOMVAR
        value: foo
    rsyslog_extra_env: |
      - name: MYCUSTOMVAR
        value: foo
    ee_extra_env: |
      - name: MYCUSTOMVAR
        value: foo

CSRF Cookie Secure Setting

With csrf_cookie_secure, you can pass the value for CSRF_COOKIE_SECURE to /etc/tower/settings.py.

Name | Description | Default
csrf_cookie_secure | CSRF Cookie Secure | ''

Example configuration of the csrf_cookie_secure setting:

  spec:
    csrf_cookie_secure: 'False'

Session Cookie Secure Setting

With session_cookie_secure, you can pass the value for SESSION_COOKIE_SECURE to /etc/tower/settings.py.

Name | Description | Default
session_cookie_secure | Session Cookie Secure | ''

Example configuration of the session_cookie_secure setting:

  spec:
    session_cookie_secure: 'False'

Extra Settings

With extra_settings, you can pass multiple custom settings via the awx-operator. The parameter extra_settings will be appended to /etc/tower/settings.py and can be an alternative to the extra_volumes parameter.

Name | Description | Default
extra_settings | Extra settings | ''

Note: Parameters configured in extra_settings are set as read-only settings in AWX. As a result, they cannot be changed in the UI after deployment. If you need to change the setting after the initial deployment, you need to change it on the AWX CR spec.

Example configuration of extra_settings parameter

  spec:
    extra_settings:
      - setting: MAX_PAGE_SIZE
        value: "500"

      - setting: AUTH_LDAP_BIND_DN
        value: "cn=admin,dc=example,dc=com"

      - setting: LOG_AGGREGATOR_LEVEL
        value: "'DEBUG'"

Note that for some settings, such as LOG_AGGREGATOR_LEVEL, the value may need double quotes.

No Log

You can configure no_log for the operator's tasks with the no_log parameter.

Name | Description | Default
no_log | No log configuration | 'true'

Example configuration of no_log parameter

  spec:
    no_log: true

Auto upgrade

With this parameter, you can influence the behavior during an operator upgrade. If set to true, the operator will upgrade the specific instance directly. When the value is set to false and there is a running deployment, the operator will not update the AWX instance. This can be useful when you have multiple AWX instances that you want to upgrade step by step instead of all at once.

Name | Description | Default
auto_upgrade | Automatic upgrade of AWX instances | true

Example configuration of auto_upgrade parameter

  spec:
    auto_upgrade: true

Upgrade of instances without auto upgrade

There are two ways to upgrade instances which are marked with the 'auto_upgrade: false' flag.

Changing flags:

  • change the auto_upgrade flag on your AWX object to true
  • wait until the upgrade process of that instance is finished
  • change the auto_upgrade flag on your AWX object back to false

Delete the deployment:

  • delete the deployment object of your AWX instance

$ kubectl -n awx delete deployment <yourInstanceName>

  • wait until the instance gets redeployed

Service Account

If you need to modify some ServiceAccount properties, the following variable is available.

Name | Description | Default
service_account_annotations | Annotations to the ServiceAccount | ''

Example configuration of service account annotations

  spec:
    service_account_annotations: |
      eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<IAM_ROLE_NAME>

Labeling operator managed objects

In certain situations, labeling of the Kubernetes objects managed by the operator might be desired (e.g. for owner identification purposes). For that, the additional_labels parameter can be used.

Name | Description | Default
additional_labels | Additional labels defined on the resource, which should be propagated to child resources | []

Example configuration where only my/team and my/service labels will be propagated to child objects (Deployment, Secrets, ServiceAccount, etc):

apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
  labels:
    my/team: "foo"
    my/service: "bar"
    my/do-not-inherit: "yes"
spec:
  additional_labels:
  - my/team
  - my/service
...

Pods termination grace period

During deployment restarts or new rollouts, when old ReplicaSet Pods are being terminated, the corresponding jobs which are managed (executed or controlled) by old AWX Pods may end up in Error state as there is no mechanism to transfer them to the newly spawned AWX Pods. To work around the problem one could set termination_grace_period_seconds in AWX spec, which does the following:

  • It sets the corresponding terminationGracePeriodSeconds Pod spec of the AWX Deployment to the value provided

    The grace period is the duration in seconds between the processes running in the pod being sent a termination signal and the processes being forcibly halted with a kill signal.

  • It adds a PreStop hook script, which will keep AWX Pods in the terminating state until the script finishes, up to terminationGracePeriodSeconds.

    This grace period applies to the total time it takes for both the PreStop hook to execute and for the Container to stop normally

    The hook script simply waits until the corresponding AWX Pod (instance) no longer has any managed jobs, at which point it finishes successfully and hands the rest of the Pod termination process over to the normal AWX processes.

One may want to set this value to the maximum duration they are willing to wait for the affected jobs to finish. Keep in mind that waiting for jobs to finish may increase Pod termination time in situations such as kubectl rollout restart, an AWX upgrade by the operator, or Kubernetes API-initiated evictions.

Name | Description | Default
termination_grace_period_seconds | Optional duration in seconds pods need to terminate gracefully | not set
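
For example, a minimal sketch giving managed jobs up to ten minutes to finish before the Pod is killed (the value is illustrative):

  spec:
    termination_grace_period_seconds: 600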

Uninstall

To uninstall an AWX deployment instance, you basically need to remove the AWX resource for that instance. For example, to delete an AWX instance named awx-demo, you would do:

$ kubectl delete awx awx-demo
awx.awx.ansible.com "awx-demo" deleted

Deleting an AWX instance will remove all related deployments and statefulsets; however, persistent volumes and secrets will remain. To enforce secrets also being removed, you can set garbage_collect_secrets: true, as shown below.
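For example, a minimal sketch of a spec with secret garbage collection enabled:

---
spec:
  ...
  garbage_collect_secrets: true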

Note: If you ever intend to recover an AWX from an existing database you will need a copy of the secrets in order to perform a successful recovery.

Upgrading

To upgrade AWX, it is recommended to upgrade the awx-operator to the version that maps to the desired version of AWX. To find the version of AWX that will be installed by the awx-operator by default, check the version specified in the image_version variable in roles/installer/defaults/main.yml for that particular release.

Apply the awx-operator.yml for that release to upgrade the operator, and in turn also upgrade your AWX deployment.

Backup

The first part of any upgrade should be a backup. Note that there are secrets in the pod which work in conjunction with the database. Having just a database backup without the required secrets will not be sufficient for recovering from an issue when upgrading to a new version. See the backup role documentation for information on how to back up your database and secrets.

In the event you need to recover from the backup, see the restore role documentation. Before restoring from a backup, be sure to:

  • delete the old existing AWX CR
  • delete the persistent volume claim (PVC) for the database from the old deployment, which has a name like postgres-13-<deployment-name>-postgres-13-0

Note: Do not delete the namespace/project, as that will delete the backup and the backup's PVC as well.

PostgreSQL Upgrade Considerations

If there is a PostgreSQL major version upgrade, after the data directory on the PVC is migrated to the new version, the old PVC is kept by default. This provides the ability to roll back if needed, but can take up extra storage space in your cluster unnecessarily. You can configure it to be deleted automatically after a successful upgrade by setting the following variable on the AWX spec.

  spec:
    postgres_keep_pvc_after_upgrade: False

v0.14.0

Cluster-scope to Namespace-scope considerations

Starting with awx-operator 0.14.0, AWX can only be deployed in the namespace that the operator exists in. This is called a namespace-scoped operator. If you are upgrading from an earlier version, you will want to delete your existing awx-operator service account, role and role binding.

Project is now based on v1.x of the operator-sdk project

Starting with awx-operator 0.14.0, the project is now based on operator-sdk 1.x. You may need to manually delete your old operator Deployment to avoid issues.

Steps to upgrade

Delete your old AWX Operator and existing awx-operator service account, role and role binding in default namespace first:

$ kubectl -n default delete deployment awx-operator
$ kubectl -n default delete serviceaccount awx-operator
$ kubectl -n default delete clusterrolebinding awx-operator
$ kubectl -n default delete clusterrole awx-operator

Then install the new AWX Operator by following the instructions in Basic Install. The NAMESPACE environment variable has to be the name of the namespace in which your old AWX instance resides.

Once the new AWX Operator is up and running, your AWX deployment will also be upgraded.

Disable IPV6

Starting with AWX Operator release 0.24.0, IPv6 was enabled in the nginx configuration, which causes upgrades and installs to fail in environments where IPv6 is not allowed. Starting with the 1.1.1 release, you can set the ipv6_disabled flag on the AWX spec. If you need to use an AWX Operator version between 0.24.0 and 1.1.1 in an IPv6-disabled environment, it is suggested to enable IPv6 on the worker nodes.

In order to disable IPv6 in the nginx configuration (awx-web container), add the following to the AWX spec.

The following variables are customizable:

Name | Description | Default
ipv6_disabled | Flag to disable ipv6 | false

spec:
  ipv6_disabled: true

Adding Execution Nodes

Starting with AWX Operator v0.30.0 and AWX v21.7.0, standalone execution nodes can be added to your deployments. See AWX execution nodes docs for information about this feature.

Custom Receptor CA

The control nodes on the K8S cluster will communicate with execution nodes via mutual TLS TCP connections, running via Receptor. Execution nodes will verify incoming connections by ensuring the x509 certificate was issued by a trusted Certificate Authority (CA).

A user may wish to provide their own CA for this validation. If no CA is provided, AWX Operator will automatically generate one using OpenSSL.

Given a custom ca.crt and ca.key stored locally, run the following:

kubectl create secret tls awx-demo-receptor-ca \
   --cert=/path/to/ca.crt --key=/path/to/ca.key

The secret should be named {AWX Custom Resource name}-receptor-ca. In the example above, the AWX CR name is "awx-demo"; please replace "awx-demo" with your AWX Custom Resource name.

If this secret is created after AWX is deployed, run the following to restart the deployment:

kubectl rollout restart deployment awx-demo

Important note: changing the receptor CA will break connections to any existing execution nodes. These nodes will enter an unavailable state, and jobs will not be able to run on them. You will need to download and re-run the install bundle for each execution node. This replaces the TLS certificate files with ones signed by the new CA; the execution nodes should then appear in a ready state after a few minutes.

Contributing

Please visit our contributing guidelines.

Release Process

The first step is to create a draft release. Typically this will happen in the Stage Release workflow for AWX and you don't need to do it as a separate step.

If you need to do an independent release of the operator, you can run the Stage Release in the awx-operator repo. Both of these workflows will run smoke tests, so there is no need to do this manually.

After the draft release is created, publish it, and the Promote AWX Operator image workflow will run, which will:

  • Publish image to Quay
  • Release Helm chart

Author

This operator was originally built in 2019 by Jeff Geerling and is now maintained by the Ansible Team.

Code of Conduct

We ask all of our community members and contributors to adhere to the Ansible code of conduct. If you have questions or need assistance, please reach out to our community team at codeofconduct@ansible.com.

Get Involved

We welcome your feedback and ideas. The AWX operator uses the same mailing list and IRC channel as AWX itself. Here's how to reach us with feedback and questions:

  • Join the #ansible-awx channel on irc.libera.chat
  • Join the mailing list

Ansible Risk Insight (ARI) is the tool to evaluate the quality and risk of the ansible content.
Python
32
star
66

galaxy-lint-rules

Ansible Lint rules used by Galaxy and Mazer to evaluate Ansible content
Python
29
star
67

tower-example

Ansible Tower Example Playbooks
28
star
68

ansible-lightspeed

This repository is no longer in use. The Ansible Lightspeed with IBM watsonx Code Assistant product documentation can be found at https://docs.redhat.com/en/documentation/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant.
28
star
69

ansible-runner-http

Python
28
star
70

distro-test-containers

Distribution specific containers for Ansible integration testing.
Dockerfile
27
star
71

galaxy-importer

Galaxy content importer
Python
26
star
72

project-config

Zuul configuration files for the Ansible tenant
Python
25
star
73

awx-logos

Less
25
star
74

role-install-gcloud

Install Google Cloud SDK and Kubernetes kubectl CLI.
Shell
24
star
75

ansible-zuul-jobs

Zuul job definitions for the Ansible tenant.
Python
23
star
76

ansible-sdk

The Ansible SDK
Python
23
star
77

azure-testing

Former home for Ansible Azure module testing. Testing is now part of the main Ansible repository.
21
star
78

network-infra-playbooks

Playbooks and roles for installing and managing Ansible networking CI
Shell
21
star
79

ansible-policy

ansible-policy is a prototype implementation which allows us to define and set constraints to the Ansible project in OPA Rego language.
Python
21
star
80

galaxy-issues

This repository exists solely for the tracking of user issues with Ansible Galaxy.
20
star
81

vcenter-test-container

vCenter simulator container for testing.
Python
20
star
82

docsite

Static HTML and assets for docs.ansible.com
HTML
19
star
83

ansible-content-actions

Combine GitHub Actions to create a streamlined workflow for testing Ansible collection repositories on GitHub.
19
star
84

django-gulp-nginx

Django + PostgreSQL + Nginx with Gulp-built static assets framework, managed with Ansible Container
JavaScript
19
star
85

aap-docs

Asciidoc technical content for Ansible Automation Platform
19
star
86

terraform-provider-aap

Terraform Provider for Ansible Automation Platform
Go
18
star
87

pinakes

Python
18
star
88

ansible_tower_client_ruby

Ruby gem for the Ansible Tower REST API
Ruby
18
star
89

ansible-compat

A python package containing functions that help interacting with various versions of Ansible
Python
18
star
90

community-docs

docs.ansible.com/community
18
star
91

ambassadors

A repository of useful materials for Ansible Ambassadors around the world.
17
star
92

team-devtools

Shared practices, workflows and decisions impacting Ansible devtools projects
Dockerfile
17
star
93

test-network-modules

Playbooks for testing Ansible core network modules
JavaScript
17
star
94

ansible-dev-environment

Build and maintain a development environment including ansible collections and their python dependencies
Python
17
star
95

docker-testing

New Docker modules.
Shell
17
star
96

network

Ansible collection for network devices
16
star
97

tower-nagios-integration

Scripts and documentation related to the integration of Ansible Tower with Nagios.
Python
15
star
98

django-template

A Django project template for Ansible Container
Python
15
star
99

logos

Ansible upstream logos
Shell
14
star
100

nginx-container

Add an nginx service to your Ansible Container project
Python
14
star
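To make the ansible-runner entry above concrete, here is a minimal sketch of its Python interface. The private data directory path and playbook name are hypothetical placeholders; `ansible_runner.run()` and the `status`, `rc`, `events`, and `stats` attributes on its result are the library's documented entry points.

```python
import ansible_runner

# Hypothetical layout: /tmp/runner-demo contains project/site.yml,
# inventory/hosts, and optionally env/ with settings and secrets.
result = ansible_runner.run(
    private_data_dir="/tmp/runner-demo",
    playbook="site.yml",
)

print(result.status)  # e.g. "successful" or "failed"
print(result.rc)      # the ansible-playbook process return code

# The run is exposed as a stream of parsed JSON event dicts.
for event in result.events:
    print(event.get("event"))

# Per-host recap, equivalent to the CLI "PLAY RECAP".
print(result.stats)
```

The private data directory convention is what gives runner its stability: everything the run needs (project, inventory, environment) lives in one self-contained tree, whether runner is invoked as a CLI, a Python module, or inside a container image.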
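The pytest-ansible entry describes a fixture that proxies ad-hoc module calls to inventory hosts. The sketch below illustrates that fixture interface; the inventory path is a placeholder, and the exact shape of the returned results has varied across plugin versions, so treat this as illustrative rather than definitive.

```python
# test_ping.py
# Run with something like:
#   pytest --ansible-inventory=inventory.ini --ansible-host-pattern=all

def test_hosts_answer_ping(ansible_module):
    # The ansible_module fixture dispatches the ping module to every
    # host matched by --ansible-host-pattern.
    contacted = ansible_module.ping()
    for host, result in contacted.items():
        assert result.get("ping") == "pong", f"{host} did not answer"
```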
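Likewise, the tacacs_plus client exposes the three TACACS+ A's (authentication, authorization, accounting) as methods on a single client object. A minimal authentication sketch, based on the client interface the project documents, follows; the server address and shared secret are placeholders, and port 49 is the TACACS+ default.

```python
import socket

from tacacs_plus.client import TACACSClient

# Placeholder server details.
client = TACACSClient(
    "tacacs.example.com", 49, "shared-secret",
    timeout=10, family=socket.AF_INET,
)

# PAP authentication; the reply object carries a `valid` flag.
reply = client.authenticate("username", "password")
print("PASS" if reply.valid else "FAIL")
```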