Claranet France Terraform Wrapper

claranet-tfwrapper


tfwrapper is a python wrapper for Terraform which aims to simplify Terraform usage and enforce best practices.

Table Of Contents

Features

  • Terraform behaviour overriding
  • State centralization enforcement
  • Standardized file structure
  • Stack initialization from templates
  • AWS credentials caching
  • Azure credentials loading (both Service Principal and user modes)
  • GCP and GKE user ADC support
  • Plugins caching
  • Tab completion

Drawbacks

  • AWS-oriented (although other cloud providers do work)
  • Setup overhead

Setup Dependencies

  • build-essential (provides C/C++ compilers)
  • libffi-dev
  • libssl-dev
  • python3 >= 3.8.1 <4.0
  • python3-dev
  • python3-venv
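
On Debian or Ubuntu, the setup dependencies above can typically be installed with apt (a sketch; package names may differ on other distributions):

sudo apt-get install build-essential libffi-dev libssl-dev python3 python3-dev python3-venv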

Runtime Dependencies

  • terraform >= 0.10 (>= 0.15 for fully working Azure backend with isolation due to hashicorp/terraform#25416)
  • azure-cli when using context based Azure authentication

Recommended setup

  • Terraform 1.0+
  • An AWS S3 bucket and DynamoDB table for state centralization in AWS.
  • An Azure Blob Storage container for state centralization in Azure.

Installation

tfwrapper should be installed using pipx (recommended) or pip:

pipx install claranet-tfwrapper
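
If pipx is not available, a plain pip user-level install should also work (a sketch):

pip install --user claranet-tfwrapper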

Setup command-line completion

Add the following to your shell's interactive configuration file, e.g. .bashrc for bash:

eval "$(register-python-argcomplete tfwrapper -e tfwrapper)"

You can then press the completion key (usually Tab ↹) twice to get your partially typed tfwrapper commands completed.

Note: the -e tfwrapper parameter adds a suffix to the defined _python_argcomplete function to avoid clashes with other packages (see kislyuk/argcomplete#310 (comment) for context).
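
For zsh, completion is usually registered through argcomplete's bash-compatibility layer (an assumption, not covered by this README); for example in your .zshrc:

autoload -U bashcompinit
bashcompinit
eval "$(register-python-argcomplete tfwrapper -e tfwrapper)"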

Upgrade from tfwrapper v7 or older

If you used versions of the wrapper older than v8, there is not much to do when upgrading to v8 apart from a little cleanup. The wrapper is no longer installed as a git submodule of your project, as was previously instructed, and there is no longer any Makefile to activate it.

Just clean up each project by destroying the .wrapper submodule:

git rm -f Makefile
git submodule deinit .wrapper
rm -rf .git/modules/.wrapper
git rm -f .wrapper

Then check the staged changes and commit them.

Required files

tfwrapper expects multiple files and directories at the root of a project.

conf

Stacks configurations are stored in the conf directory.

templates

The templates directory is used to store the state backend configuration template and the Terraform stack templates used to initialize new stacks. Using a git submodule is recommended.
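
For example, if you keep the shared templates in a dedicated repository (the URL below is a placeholder), they can be pulled in as a submodule:

git submodule add git@github.com:mycompany/terraform-stack-templates.git templates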

The following files are required:

  • templates/{provider}/common/state.tf.jinja2: AWS S3 or Azure Blob Storage state backend configuration template.
  • templates/{provider}/basic/main.tf: the default Terraform configuration for new stacks. The whole templates/{provider}/basic directory is copied on stack initialization.

For example with AWS:

mkdir -p templates/aws/common templates/aws/basic

# create state configuration template with AWS backend
cat << 'EOF' > templates/aws/common/state.tf.jinja2
{% if region is not none %}
{% set region = '/' + region + '/' %}
{% else %}
{% set region = '/' %}
{% endif %}

terraform {
  backend "s3" {
    bucket = "my-centralized-terraform-states-bucket"
    key    = "{{ client_name }}/{{ account }}/{{ environment }}{{ region }}{{ stack }}/terraform.state"
    region = "eu-west-1"

    dynamodb_table = "my-terraform-states-lock-table"
  }
}

resource "null_resource" "state-test" {}
EOF

# create a default stack template with support for AWS assume role
cat << 'EOF' > templates/aws/basic/main.tf
provider "aws" {
  region     = var.aws_region
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  token      = var.aws_token
}
EOF
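
The provider block above references variables that the wrapper exports as TF_VAR_* at runtime (see the Environment section below); they still need to be declared in the stack. A minimal sketch, with an illustrative file name:

# declare the variables used by the provider block (sketch)
cat << 'EOF' > templates/aws/basic/variables.tf
variable "aws_region" {}
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "aws_token" {}
EOF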

For example with Azure:

mkdir -p templates/azure/common templates/azure/basic

# create state configuration template with Azure backend
cat << 'EOF' > templates/azure/common/state.tf.jinja2
{% if region is not none %}
{% set region = '/' + region + '/' %}
{% else %}
{% set region = '/' %}
{% endif %}

terraform {
  backend "azurerm" {
    subscription_id      = "00000000-0000-0000-0000-000000000000"
    resource_group_name  = "my-resource-group"
    storage_account_name = "my-centralized-terraform-states-account"
    container_name       = "terraform-states"

    key = "{{ client_name }}/{{ account }}/{{ environment }}{{ region }}{{ stack }}/terraform.state"
  }
}
EOF

# create a default stack template with support for Azure credentials
cat << 'EOF' > templates/azure/basic/main.tf
provider "azurerm" {
  subscription_id = var.azure_subscription_id
  tenant_id       = var.azure_tenant_id
}
EOF

.run

The .run directory is used for credentials caching and plan storage.

mkdir .run
cat << 'EOF' > .run/.gitignore
*
!.gitignore
EOF

.gitignore

Adding the following .gitignore at the root of your project is recommended:

cat << 'EOF' > .gitignore
.terraform
terraform.tfstate
terraform.tfstate.backup
terraform.tfvars
EOF

Configuration

tfwrapper uses yaml files stored in the conf directory of the project.

tfwrapper configuration

tfwrapper uses some default behaviors that can be overridden or modified via a config.yml file in the conf directory.

---
always_trigger_init: False # Always trigger `terraform init` first when launching `plan` or `apply` commands
pipe_plan_command: "cat" # Default command used when you're invoking tfwrapper with `--pipe-plan`
use_local_azure_session_directory: False # Use the current user's Azure configuration in `~/.azure`. By default, the wrapper uses a local `azure-cli` session and configuration in the local `.run` directory.

Stacks configurations

Stacks configuration files use the following naming convention:

conf/${account}_${environment}_${region}_${stack}.yml
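
For example, the aws-account-1/production/eu-central-1/web stack from the file structure example further below is configured in:

conf/aws-account-1_production_eu-central-1_web.yml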

Here is an example for an AWS stack configuration:

---
state_configuration_name: "aws" # use "aws" backend state configuration
aws:
  general:
    account: &aws_account "xxxxxxxxxxx" # aws account for this stack
    region: &aws_region eu-west-1 # aws region for this stack
  credentials:
    profile: my-aws-profile # should be configured in .aws/config

terraform:
  vars: # variables passed to terraform
    aws_account: *aws_account
    aws_region: *aws_region
    client_name: my-client-name # arbitrary client name
    version: "1.0" # Terraform version that tfwrapper will automatically download if it's not present, and use for this stack.

Here is an example of an Azure stack configuration using user mode and an AWS S3 backend for state storage:

---
state_configuration_name: "aws-demo" # use "aws" backend state configuration
azure:
  general:
    mode: user # Uses personal credentials with MFA
    directory_id: &directory_id "00000000-0000-0000-0000-000000000000" # Azure Tenant/Directory UID
    subscription_id: &subscription_id "11111111-1111-1111-1111-111111111111" # Azure Subscription UID

terraform:
  vars:
    subscription_id: *subscription_id
    directory_id: *directory_id
    client_name: client-name # Replace it with the name of your client
    #version: "0.10"  # Terraform version like "0.10" or "0.10.5" - optional

This mode uses your personal account linked to a Microsoft account. Your account must have access to the Azure subscription in order to use Terraform.

Here is an example of an Azure stack configuration using Service Principal mode:

---
azure:
  general:
    mode: service_principal # Uses an Azure tenant Service Principal account
    directory_id: &directory_id "00000000-0000-0000-0000-000000000000" # Azure Tenant/Directory UID
    subscription_id: &subscription_id "11111111-1111-1111-1111-111111111111" # Azure Subscription UID

    credentials:
      profile: customer-profile # To stay coherent, create an AzureRM profile with the same name as the account-alias. Please checkout `azurerm_config.yml.sample` file for configuration structure.

terraform:
  vars:
    subscription_id: *subscription_id
    directory_id: *directory_id
    client_name: client-name # Replace it with the name of your client
    #version: "0.10"  # Terraform version like "0.10" or "0.10.5" - optional

The wrapper uses the Service Principal's credentials to connect to the Azure subscription. The given Service Principal must have access to the subscription. The wrapper loads the client_id, client_secret and tenant_id properties from your config.yml file located at ~/.azurerm/config.yml.

~/.azurerm/config.yml file structure example:

---
claranet-sandbox:
  client_id: aaaaaaaa-bbbb-cccc-dddd-zzzzzzzzzzzz
  client_secret: AAbbbCCCzzz==
  tenant_id: 00000000-0000-0000-0000-000000000000

customer-profile:
  client_id: aaaaaaaa-bbbb-cccc-dddd-zzzzzzzzzzzz
  client_secret: AAbbbCCCzzz==
  tenant_id: 00000000-0000-0000-0000-000000000000
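
As a sketch, this file can be created with the placeholder values above:

mkdir -p ~/.azurerm
cat << 'EOF' > ~/.azurerm/config.yml
---
customer-profile:
  client_id: aaaaaaaa-bbbb-cccc-dddd-zzzzzzzzzzzz
  client_secret: AAbbbCCCzzz==
  tenant_id: 00000000-0000-0000-0000-000000000000
EOF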

Here is an example for a GCP/GKE stack with user ADC and multiple GKE instances:

---
gcp:
  general:
    mode: adc-user
    project: &gcp_project project-name
  gke:
    - name: kubernetes-1
      zone: europe-west1-c
    - name: kubernetes-2
      region: europe-west1

terraform:
  vars:
    gcp_region: europe-west1
    gcp_zone: europe-west1-c
    gcp_project: *gcp_project
    client_name: client-name
    #version: "0.11"  # Terraform version like "0.10" or "0.10.5" - optional

You can declare multiple provider configurations; the context is set up accordingly.

⚠ This feature is only supported for Azure stacks for now and only works with Azure authentication isolation

---
azure:
  general:
    mode: service_principal # Uses an Azure tenant Service Principal account
    directory_id: &directory_id "00000000-0000-0000-0000-000000000000" # Azure Tenant/Directory UID
    subscription_id: &subscription_id "11111111-1111-1111-1111-111111111111" # Azure Subscription UID

    credentials:
      profile: customer-profile # To stay coherent, create an AzureRM profile with the same name as the account-alias. Please checkout `azurerm_config.yml.sample` file for configuration structure.

  alternative:
    mode: service_principal # Uses an Azure tenant Service Principal account
    directory_id: "00000000-0000-0000-0000-000000000000" # Azure Tenant/Directory UID
    subscription_id: "22222222-2222-2222-2222-222222222222" # Azure Subscription UID

    credentials:
      profile: claranet-sandbox # To stay coherent, create an AzureRM profile with the same name as the account-alias. Please checkout `azurerm_config.yml.sample` file for configuration structure.

terraform:
  vars:
    subscription_id: *subscription_id
    directory_id: *directory_id
    client_name: client-name # Replace it with the name of your client
    #version: "0.10"  # Terraform version like "0.10" or "0.10.5" - optional

This configuration is useful when you have several Service Principals, each with a dedicated rights scope.

The wrapper will generate the following Terraform variables that can be used in your stack:

  • <config_name>_azure_subscription_id with the Azure subscription ID. From the example, the variable is: alternative_subscription_id = "22222222-2222-2222-2222-222222222222"
  • <config_name>_azure_tenant_id with the Azure tenant ID. From the example, the variable is: alternative_tenant_id = "00000000-0000-0000-0000-000000000000"
  • <config_name>_azure_client_id with the Service Principal client ID. From the example, the variable is: alternative_client_id = "aaaaaaaa-bbbb-cccc-dddd-zzzzzzzzzzzz"
  • <config_name>_azure_client_secret with the Service Principal client secret. From the example, the variable is: alternative_client_secret = "AAbbbCCCzzz=="

Also, an isolation context is set to the local .run/azure_<config_name> directory for each configuration.
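
A minimal sketch of consuming these variables in a stack, assuming the <config_name>_azure_* naming from the list above (double-check the exact TF_VAR_* names the wrapper exports for your configuration names):

# hypothetical snippet consuming the generated variables for the "alternative" configuration
cat << 'EOF' >> main.tf
variable "alternative_azure_subscription_id" {}
variable "alternative_azure_tenant_id" {}
variable "alternative_azure_client_id" {}
variable "alternative_azure_client_secret" {}

provider "azurerm" {
  alias           = "alternative"
  subscription_id = var.alternative_azure_subscription_id
  tenant_id       = var.alternative_azure_tenant_id
  client_id       = var.alternative_azure_client_id
  client_secret   = var.alternative_azure_client_secret
  features {}
}
EOF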

States centralization configuration

The conf/state.yml configuration file defines the configurations used to connect to the state backend account. It can be an AWS (S3) or Azure (Storage Account) backend type.

You can use other backends (e.g. Google GCS or Hashicorp Consul) not specifically supported by the wrapper if you manage them yourself: just omit the conf/state.yml file or leave it empty:

---

Example configuration with both AWS and Azure backends defined:

---
aws:
  - name: "aws-demo"
    general:
      account: "xxxxxxxxxxx"
      region: eu-west-1
    credentials:
      profile: my-state-aws-profile # should be configured in .aws/config
azure:
  # This backend uses storage keys for authentication
  - name: "azure-backend"
    general:
      subscription_id: "xxxxxxx" # the Azure account to use for state storage
      resource_group_name: "tfstates-xxxxx-rg" # The Azure resource group with state storage
      storage_account_name: "tfstatesxxxxx"
  - name: "azure-alternative"
    general:
      subscription_id: "xxxxxxx" # the Azure account to use for state storage
      resource_group_name: "tfstates-xxxxx-rg" # The Azure resource group with state storage
      storage_account_name: "tfstatesxxxxx"
  # This backend uses Azure AD authentication
  - name: "azure-ad-auth"
    general:
      subscription_id: "xxxxxxx" # the Azure account to use for state storage
      resource_group_name: "tfstates-xxxxx-rg" # The Azure resource group with state storage
      storage_account_name: "tfstatesxxxxx"
      azuread_auth: true

backend_parameters: # Parameters or options which can be used by `state.j2.tf` template file
  state_snaphot: "false" # Example of Azure storage backend option

Note: the first backend will be the default one for stacks not defining state_backend_type.

How to migrate from one backend to another for state centralization

If for example you have both an AWS and Azure state backend configured in your conf/state.yml file, you can migrate your stack state from one backend to another.

Here is a quick howto:

  1. Make sure your stack is clean:

     $ cd account/path/env/your_stack
     $ tfwrapper init
     $ tfwrapper plan
     # should return no changes

  2. Change your backend in the stack configuration yaml file:

     ---
     #state_configuration_name: 'aws-demo' # previous backend
     state_configuration_name: "azure-alternative" # new backend to use

  3. Back in your stack directory, you can perform the change:

     $ cd account/path/env/your_stack
     $ rm -v state.tf # removing old state backend configuration
     $ tfwrapper bootstrap # regen a new state backend configuration based on the stack yaml config file
     $ tfwrapper init # Terraform will detect the new backend and propose to migrate it
     $ tfwrapper plan
     # should return the same changes diff as before

Stacks file structure

Terraform stacks are organized based on their:

  • account: an account alias which may reference one or multiple provider accounts. aws-production, azure-dev, etc…
  • environment: production, preproduction, dev, etc…
  • region: eu-west-1, westeurope, global, etc…
  • stack: defaults to default. web, admin, tools, etc…

The following file structure is enforced:

# enforced file structure
└── account
    └── environment
        └── region
            └── stack

# real-life example
├── aws-account-1
│   └── production
│       ├── eu-central-1
│       │   └── web
│       │       └── main.tf
│       └── eu-west-1
│           ├── default
│           │   └── main.tf
│           └── tools
│               └── main.tf
└── aws-account-2
    └── backup
        └── eu-west-1
            └── backup
                └── main.tf

Usage

Stack bootstrap

After creating a conf/${account}_${environment}_${region}_${stack}.yml stack configuration file you can bootstrap it.

# you can bootstrap using the templates/{provider}/basic stack
tfwrapper -a ${account} -e ${environment} -r ${region} -s ${stack} bootstrap

# or another stack template, for example: templates/aws/foobar
tfwrapper -a ${account} -e ${environment} -r ${region} -s ${stack} bootstrap aws/foobar

# or from an existing stack, for example: customer/env/region/stack
tfwrapper -a ${account} -e ${environment} -r ${region} -s ${stack} bootstrap mycustomer/dev/eu-west/run

Working on stacks

You can work on stacks from their directory or from the root of the project.

# working from the root of the project
tfwrapper -a ${account} -e ${environment} -r ${region} -s ${stack} plan

# working from the root of a stack
cd ${account}/${environment}/${region}/${stack}
tfwrapper plan

You can also work on several stacks sequentially with the foreach subcommand from any directory under the root of the project. By default, foreach selects all stacks under the current directory, so if called from the root of the project without any filter, it will select all stacks and execute the specified command in them, one after another:

# working from the root of the project
tfwrapper foreach -- tfwrapper init

Any combination of the -a, -e, -r and -s arguments can be used to select specific stacks, e.g. all stacks for an account across all environments but in a specific region:

# working from the root of the project
tfwrapper -a ${account} -r ${region} foreach -- tfwrapper plan

The same can be achieved with:

# working from an account directory
cd ${account}
tfwrapper -r ${region} foreach -- tfwrapper plan

Complex commands can be executed in a sub-shell with the -S/--shell argument, e.g.:

# working from an environment directory
cd ${account}/${environment}
tfwrapper foreach -S 'pwd && tfwrapper init >/dev/null 2>&1 && tfwrapper plan 2>/dev/null -- -no-color | grep "^Plan: "'

Passing options

You can pass anything you want to terraform using --.

tfwrapper plan -- -target resource1 -target resource2

Environment

tfwrapper sets the following environment variables.

S3 state backend credentials

The default AWS credentials of the environment are set to point to the S3 state backend. Those credentials are acquired from the profile defined in conf/state.yml.

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_SESSION_TOKEN

Azure Service Principal credentials

Those AzureRM credentials are loaded only if you are using the Service Principal mode. They are acquired from the profile defined in ~/.azurerm/config.yml.

  • ARM_CLIENT_ID
  • ARM_CLIENT_SECRET
  • ARM_TENANT_ID

Azure authentication isolation

The AZURE_CONFIG_DIR environment variable is set to the local .run/azure directory if the global configuration value use_local_azure_session_directory is set to true, which is the default.

If you have multiple configurations in your stacks, you also have <CONFIG_NAME>_AZURE_CONFIG_DIR which is set to the local .run/azure_<config_name> directory.
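
For instance, the isolated session can be inspected with azure-cli by pointing it at the same directory (a sketch):

AZURE_CONFIG_DIR=.run/azure az account show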

GCP configuration

These GCP-related variables are available from the environment when using the example configuration:

  • TF_VAR_gcp_region
  • TF_VAR_gcp_zone
  • TF_VAR_gcp_project

GKE configurations

Each GKE instance has its own kubeconfig; the path to each configuration is available from the environment:

  • TF_VAR_gke_kubeconfig_${gke_cluster_name}

The kubeconfig is automatically fetched by the wrapper (using gcloud) and stored inside the .run directory of your project. It is refreshed automatically on every run to ensure you point to the correct Kubernetes endpoint. You can disable this behaviour by setting refresh_kubeconfig: never in your cluster settings.

---
gcp:
  general:
    mode: adc-user
    project: &gcp_project project-name
  gke:
    - name: kubernetes-1
      zone: europe-west1-c
      refresh_kubeconfig: never

Stack configurations and credentials

The terraform['vars'] dictionary from the stack configuration is accessible as Terraform variables.

The profile defined in the stack configuration is used to acquire credentials accessible from Terraform. There are two supported providers; the variables that are loaded depend on the provider in use.

  • TF_VAR_client_name (if set in .yml stack configuration file)
  • TF_VAR_aws_account
  • TF_VAR_aws_region
  • TF_VAR_aws_access_key
  • TF_VAR_aws_secret_key
  • TF_VAR_aws_token
  • TF_VAR_azurerm_region
  • TF_VAR_azure_region
  • TF_VAR_azure_subscription_id
  • TF_VAR_azure_tenant_id
  • TF_VAR_azure_state_access_key (removed in v11.0.0)

Stack path

The stack path is passed to Terraform. This is especially useful for resource naming and tagging.

  • TF_VAR_account
  • TF_VAR_environment
  • TF_VAR_region
  • TF_VAR_stack
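
For example (a sketch, with an illustrative file name), a stack can turn these into a common tag map:

cat << 'EOF' > common-tags.tf
variable "account" {}
variable "environment" {}
variable "region" {}
variable "stack" {}

locals {
  common_tags = {
    account     = var.account
    environment = var.environment
    region      = var.region
    stack       = var.stack
  }
}
EOF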

Development

Tests

All new code contributions should come with unit and/or integration tests.

To run those tests locally, use tox:

poetry run tox -e py

Linters are also used to ensure code respects our standards.

To run those linters locally:

poetry run tox -e lint

Debug command-line completion

You can get verbose debugging information for argcomplete by defining the following environment variable:

export _ARC_DEBUG=1

Python code formatting

Our code is formatted with black.

Make sure to format all your code contributions with black ${filename}.

Hint: enable auto-format on save with black in your favorite IDE.

Checks

To run code and documentation style checks, run tox -e lint.

In addition to black --check, code is also checked with:

README TOC

This README's table of contents is formatted with md_toc.

Remember to update it with md_toc --in-place github README.md.

Using terraform development builds

To build and use development versions of terraform, manually put them in a ~/.terraform.d/versions/X.Y/X.Y.Z-dev/ folder:

# cd ~/go/src/github.com/hashicorp/terraform
# make XC_ARCH=amd64 XC_OS=linux bin
# ./bin/terraform version
Terraform v0.12.9-dev
# mkdir -p ~/.terraform.d/versions/0.12/0.12.9-dev
# mv ./bin/terraform ~/.terraform.d/versions/0.12/0.12.9-dev/

git pre-commit hooks

Some git pre-commit hooks are configured in .pre-commit-config.yaml for use with the pre-commit tool.

Using them helps avoid pushing changes that will fail the CI.

They can be installed locally with:

# pre-commit install

If you update the hooks configuration, run the checks against all files to make sure everything is fine:

# pre-commit run --all-files --show-diff-on-failure

Note: the pre-commit tool itself can be installed with pip or pipx.

Review and merge open Dependabot PRs

Use the scripts/merge-dependabot-mrs.sh script from the master branch to:

  • list open Dependabot PRs that are mergeable,
  • review, approve and merge them,
  • pull changes from GitHub and push them to origin.

Just invoke the script without any argument:

# ./scripts/merge-dependabot-mrs.sh

Check the help:

# ./scripts/merge-dependabot-mrs.sh --help

Tagging and publishing new releases to PyPI

Use the scripts/release.sh script from the master branch to:

  • bump the version with poetry,
  • update CHANGELOG.md,
  • commit these changes,
  • tag with last CHANGELOG.md item content as annotation,
  • bump the version with poetry again to mark it for development,
  • commit this change,
  • push all commits and tags to all remote repositories.

This will trigger a GitHub Actions job to publish packages to PyPI.

To invoke the script, pass it the desired bump rule, e.g.:

# ./scripts/release.sh minor

For more options, check the help:

# ./scripts/release.sh --help
