Terraform module for GitLab auto scaling runners on AWS spot instances

The module

This Terraform module creates a GitLab CI runner. A blog post at 040code describes the original version of the runner. The original setup of the module is based on the blog post Auto scale GitLab CI runners and save 90% on EC2 costs.

💥 BREAKING CHANGE AHEAD: Version 7 of the module rewrites the whole variable section to

  • harmonize the variable names
  • harmonize the documentation
  • remove deprecated variables
  • gain a better overview of the features provided

And it also adds

  • all possible Docker settings
  • the idle_scale_factor

We know that this is a breaking change that causes some pain, but we think it is worth it and hope you agree. To make the transition as smooth as possible, we have added a migration script to the migrations folder. It covers almost all cases, but some minor rework might still be necessary.

Check out issue 819.

The runners created by the module use spot instances by default for running the builds using the docker+machine executor.

  • Shared cache in S3 with life cycle management to clear objects after x days.
  • Logs streamed to CloudWatch.
  • Runner agents registered automatically.

The names of the runner agent and the runners are set via the overrides variable. Setting the agent name through a Name tag does not work.

# ...
overrides = {
  name_sg                     = ""
  name_runner_agent_instance  = "Gitlab Runner Agent"
  name_docker_machine_runners = "Gitlab Runner Terraform"
  name_iam_objects            = "gitlab-runner"
}

// this does not work
agent_tags = merge(local.my_tags, map("Name", "Gitlab Runner Agent"))

The runner supports 3 main scenarios:

GitLab CI docker-machine runner - one runner agent

In this scenario the runner agent runs on a single EC2 instance and the runners are created by docker machine using spot instances. Runners scale automatically based on the configuration. The module creates an S3 cache by default, which is shared across the runners (spot instances).

(diagram: runners-default)

GitLab CI docker-machine runner - multiple runner agents

In this scenario multiple runner agents can be created with different configurations by instantiating the module multiple times. Runners scale automatically based on the configuration. The S3 cache can be shared across runners by managing the cache outside of the module.

(diagram: runners-cache)

GitLab CI docker runner

In this scenario docker, not docker machine, is used to schedule the builds. Builds run on the same EC2 instance as the agent. Auto scaling is not supported.

(diagram: runners-docker)

Prerequisites

Terraform

Ensure you have Terraform installed. See .terraform-version for the version the module is developed against, and the requirements section below for the minimum supported version. A handy tool to manage your Terraform version is tfenv.

On macOS it is simple to install tfenv using brew.

brew install tfenv

Next install a Terraform version.

tfenv install <version>

AWS

Ensure you have set up your AWS credentials. The module requires access to IAM, EC2, CloudWatch, S3 and SSM.

JQ & AWS CLI

In order to destroy the module, you need to run Terraform from a host with both jq and the AWS CLI (aws) installed and available in the environment.

On macOS it is simple to install them using brew.

brew install jq awscli

Service linked roles

The GitLab runner EC2 instance requires the following service linked roles:

  • AWSServiceRoleForAutoScaling
  • AWSServiceRoleForEC2Spot

By default the EC2 instance is allowed to create the required roles, but this can be disabled by setting the option allow_iam_service_linked_role_creation to false. If disabled you must ensure the roles exist. You can create them manually or via Terraform.

resource "aws_iam_service_linked_role" "spot" {
  aws_service_name = "spot.amazonaws.com"
}

resource "aws_iam_service_linked_role" "autoscaling" {
  aws_service_name = "autoscaling.amazonaws.com"
}

KMS keys

If a KMS key is set via kms_key_id, make sure that you also grant proper access to the key. Otherwise you might get errors, e.g. the build cache can't be decrypted or logging via CloudWatch is not possible. For a CloudWatch example, check out kms-policy.json.
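
A minimal sketch of wiring in an existing customer managed key is shown below. The alias/gitlab-runner lookup is a hypothetical example, and the key policy itself still needs the CloudWatch statements from kms-policy.json.

# Sketch: pass an existing customer managed key to the module.
# The alias is hypothetical; use the id/ARN of your own key.
data "aws_kms_key" "runner" {
  key_id = "alias/gitlab-runner"
}

module "runner" {
  # ...
  kms_key_id = data.aws_kms_key.runner.arn
}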

GitLab runner token configuration

By default the runner is registered on initial deployment. In previous versions of this module this was a manual process. The manual process is still supported but will be removed in future releases. The runner token is stored in the AWS SSM parameter store. See the examples for more details.

To register the runner automatically, set the variable gitlab_runner_registration_config["registration_token"]. This token value can be found in your GitLab project, group, or global settings. For a generic runner you can find the token in the admin section. By default the runner is locked to the target project and does not run untagged jobs. Below is an example of the configuration map.

gitlab_runner_registration_config = {
  registration_token = "<registration token>"
  tag_list           = "<your tags, comma separated>"
  description        = "<some description>"
  locked_to_project  = "true"
  run_untagged       = "false"
  maximum_timeout    = "3600"
  # ref_protected runner will only run on pipelines triggered on protected branches. Defaults to not_protected
  access_level       = "<not_protected OR ref_protected>"
}

The registration token can also be read from the SSM parameter store. If no registration token is passed in, the module looks up the token in the SSM parameter store at the location specified by secure_parameter_store_gitlab_runner_registration_token_name.
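
As a sketch of this approach, the parameter can be pre-created with Terraform. The parameter name below is the default listed in the inputs section (runner_gitlab_registration_token_secure_parameter_store_name); adjust it if you override that variable.

resource "aws_ssm_parameter" "gitlab_registration_token" {
  # default name of runner_gitlab_registration_token_secure_parameter_store_name
  name  = "gitlab-runner-registration-token"
  type  = "SecureString"
  value = "<registration token>"
}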

For migration to the new setup, simply add the runner token to the parameter store. Once the runner is started it will look up the required values via the parameter store. If the value is null, a new runner will be registered and a new token created and stored.

# Set the following variables; look up the values in your Terraform config.
aws_region="<value of var.aws_region>"
token="<runner token, see your GitLab runner>"
parameter_name="<value of var.environment>-<value of var.secure_parameter_store_runner_token_key>"

aws ssm put-parameter --overwrite --type SecureString --name "${parameter_name}" --value "${token}" --region "${aws_region}"

Once you have created the parameter, you must remove the variable runners_token from your config. The next time your GitLab runner instance is created it will look up the token from the SSM parameter store.

Finally, manual runner creation is still supported. No changes are required. Please keep in mind that this setup will be removed in future releases.

Auto Scaling Group

Scheduled scaling

When enable_schedule=true, the schedule_config variable can be used to scale the Auto Scaling group.

Scaling may be defined with one scale_out scheduled action and/or one scale_in scheduled action.

For example:

  module "runner" {
    # ...
    enable_schedule = true
    schedule_config = {
      # Configure optional scale_out scheduled action
      scale_out_recurrence = "0 8 * * 1-5"
      scale_out_count      = 1 # Default for min_size, desired_capacity and max_size
      # Override using: scale_out_min_size, scale_out_desired_capacity, scale_out_max_size

      # Configure optional scale_in scheduled action
      scale_in_recurrence  = "0 18 * * 1-5"
      scale_in_count       = 0 # Default for min_size, desired_capacity and max_size
      # Override using: scale_in_min_size, scale_in_desired_capacity, scale_in_max_size
    }
  }

Instance Termination

The Auto Scaling Group can be configured with a lifecycle hook that executes a provided Lambda function when the runner instance is terminated; the function terminates the additional instances that were spawned by the runner.

The use of the termination lifecycle can be toggled using the asg_termination_lifecycle_hook_create variable.

When using this feature, a builds/ directory relative to the root module will persist that contains the packaged Lambda function.
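
A minimal sketch of enabling the hook is shown below. The variable name follows the text above; newer releases may have renamed it, so check the inputs section of the module documentation.

module "runner" {
  # ...
  # create the ASG termination lifecycle hook and the cleanup Lambda
  asg_termination_lifecycle_hook_create = true
}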

Access runner instance

A few options are provided to access the runner instance:

  1. Access via the Session Manager (SSM) by setting enable_runner_ssm_access to true. The policy to allow access via SSM is not very restrictive.
  2. By setting none of the above, no keys or extra policies will be attached to the instance. You can still configure your own policies by attaching them to runner_agent_role_arn, as shown in the sketch below.
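
A minimal sketch of both options, assuming a hypothetical policy aws_iam_policy.my_policy defined elsewhere in your configuration:

module "runner" {
  # ...
  enable_runner_ssm_access = true # option 1: allow access via Session Manager
}

# option 2: attach your own policy to the agent role exposed by the module outputs
resource "aws_iam_role_policy_attachment" "runner_agent_custom" {
  role       = module.runner.runner_agent_role_name
  policy_arn = aws_iam_policy.my_policy.arn
}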

GitLab runner cache

By default the module creates a cache for the runner in S3. Old objects are automatically removed via a configurable life cycle policy on the bucket.

Creation of the bucket can be disabled and managed outside this module. A good use case is sharing the cache across multiple runners. For this purpose the cache is implemented as a submodule. For more details see the cache module. An example implementation of this use case can be found in the runner-public example, and a sketch is shown below.
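
A sketch of the shared-cache wiring, following the runner-public example; the cache module output names (policy_arn, bucket) are taken from that example, so verify them there before use.

module "cache" {
  source      = "cattle-ops/gitlab-runner/aws//modules/cache"
  environment = "spot-runners"
}

module "runner" {
  # ...
  # use the externally managed cache instead of creating one
  cache_bucket = {
    create = false
    policy = module.cache.policy_arn
    bucket = module.cache.bucket
  }
}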

In case you enable the access logging for the S3 cache bucket, you have to add the following statement to your S3 logging bucket policy.

{
    "Sid": "Allow access logging",
    "Effect": "Allow",
    "Principal": {
        "Service": "logging.s3.amazonaws.com"
    },
    "Action": "s3:PutObject",
    "Resource": "<s3-arn>/*"
}

In case you manage the S3 cache bucket yourself, it might be necessary to apply the cache module before applying the runner module. A typical error message looks like:

Error: Invalid count argument
on .terraform/modules/gitlab_runner/main.tf line 400, in resource "aws_iam_role_policy_attachment" "docker_machine_cache_instance":
  count = var.cache_bucket["create"] || length(lookup(var.cache_bucket, "policy", "")) > 0 ? 1 : 0
The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many
instances will be created. To work around this, use the -target argument to first apply only the resources that the count
depends on.

The workaround is to run terraform apply -target=module.cache first, followed by a terraform apply to apply everything else. This is a one-time effort needed at the very beginning.

Usage

Configuration

Update the variables in terraform.tfvars according to your needs and add the following variables. See the previous step for instructions on how to obtain the token.

runner_name  = "NAME_OF_YOUR_RUNNER"
gitlab_url   = "GITLAB_URL"
runner_token = "RUNNER_TOKEN"

The base image used to host the GitLab Runner agent is the latest available Amazon Linux 2 HVM EBS AMI. In previous versions of this module a hard coded list of AMIs per region was provided. This list has been replaced by a search filter to find the latest AMI. Setting the filter to amzn2-ami-hvm-2.0.20200207.1-x86_64-ebs will allow you to version lock the target AMI.
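
A minimal sketch of pinning the AMI, using the runner_ami_filter input documented below (older releases used a different variable name for this filter):

module "runner" {
  # ...
  # version lock the agent AMI to a specific Amazon Linux 2 release
  runner_ami_filter = {
    name = ["amzn2-ami-hvm-2.0.20200207.1-x86_64-ebs"]
  }
}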

Scenario: Basic usage

Below is a basic example of how to use the module. For dependencies such as a VPC, have a look at the default example.

module "runner" {
  # https://registry.terraform.io/modules/cattle-ops/gitlab-runner/aws/
  source  = "cattle-ops/gitlab-runner/aws"

  aws_region  = "eu-west-1"
  environment = "spot-runners"

  vpc_id                   = module.vpc.vpc_id
  subnet_ids_gitlab_runner = module.vpc.private_subnets
  subnet_id_runners        = element(module.vpc.private_subnets, 0)

  runners_name       = "docker-default"
  runners_gitlab_url = "https://gitlab.com"

  gitlab_runner_registration_config = {
    registration_token = "my-token"
    tag_list           = "docker"
    description        = "runner default"
    locked_to_project  = "true"
    run_untagged       = "false"
    maximum_timeout    = "3600"
  }

}

Removing the module

As the module creates a number of resources during runtime (key pairs and spot instance requests), it needs a special procedure to remove them.

  1. Use the AWS Console to set the desired capacity of all auto scaling groups to 0. To find the correct ones use the var.environment as search criteria. Setting the desired capacity to 0 prevents AWS from creating new instances which will in turn create new resources.
  2. Kill all agent ec2 instances via the AWS Console. This triggers a Lambda function in the background which removes all resources created during the runtime of the EC2 instances.
  3. Wait 3 minutes so the Lambda function has enough time to delete the key pairs and spot instance requests.
  4. Run a terraform destroy or terraform apply (depends on your setup) to remove the module.

If you don't follow the procedure above, key pairs and spot instance requests might survive the removal and cause additional costs, although this has not been observed in practice. Executing only step 4 should also be fine.

Scenario: Multi-region deployment

Name clashes due to multi-region deployments of the global AWS resources created by this module (IAM, S3) can be avoided by including a distinguishing region-specific prefix via the cache_bucket_prefix string and via name_iam_objects in the overrides map. A simple approach is to set the region-specific prefix to the AWS region the module is deployed to.

module "runner" {
  # https://registry.terraform.io/modules/cattle-ops/gitlab-runner/aws/
  source  = "cattle-ops/gitlab-runner/aws"

  aws_region  = "eu-west-1"
  environment = "spot-runners"

  vpc_id                   = module.vpc.vpc_id
  subnet_ids_gitlab_runner = module.vpc.private_subnets
  subnet_id_runners        = element(module.vpc.private_subnets, 0)

  runners_name       = "docker-default"
  runners_gitlab_url = "https://gitlab.com"

  gitlab_runner_registration_config = {
    registration_token = "my-token"
    tag_list           = "docker"
    description        = "runner default"
    locked_to_project  = "true"
    run_untagged       = "false"
    maximum_timeout    = "3600"
  }

  overrides = {
    name_iam_objects = "<region-specific-prefix>-gitlab-runner-iam"
  }

  cache_bucket_prefix = "<region-specific-prefix>"
}

Scenario: Use of Spot Fleet

Since spot instances can be reclaimed by AWS depending on the instance type and AZ you are using, you may want multiple instance types in multiple AZs. This is where spot fleets come in: when there is no capacity for one instance type in one AZ, AWS picks the next instance type, and so on. This is possible because the docker-machine fork used by this module supports spot fleets.

We have seen that the docker-machine fork this module uses consumes more RAM when using spot fleets. For comparison, launching 50 machines at the same time consumes ~1.2 GB of RAM. In our case, we had to change the instance type of the runner from t3.micro to t3.small.

Configuration example

module "runner" {
  # https://registry.terraform.io/modules/npalm/gitlab-runner/aws/
  source  = "npalm/gitlab-runner/aws"

  aws_region  = "eu-west-3"
  environment = "spot-runners"

  vpc_id                    = module.vpc.vpc_id
  subnet_id                 = module.vpc.private_subnets[0] # subnet of the agent
  fleet_executor_subnet_ids = module.vpc.private_subnets

  docker_machine_instance_types_fleet       = ["t3a.medium", "t3.medium", "t2.medium"]
  use_fleet                                 = true
  fleet_key_pair_name                       = "<key_pair_name>"

  runners_name       = "docker-machine"
  runners_gitlab_url = "https://gitlab.com"

  gitlab_runner_registration_config = {
    registration_token = "my-token"
    tag_list           = "docker"
    description        = "runner default"
    locked_to_project  = "true"
    run_untagged       = "false"
    maximum_timeout    = "3600"
  }

  overrides = {
    name_iam_objects = "<region-specific-prefix>-gitlab-runner-iam"
  }
}

Examples

A few examples are provided. Use the following steps to deploy. Ensure your AWS and Terraform environment is set up correctly. All commands below should be run from the terraform-aws-gitlab-runner/examples/<example-dir> directory. Don't forget to remove the runners manually from your GitLab instance as soon as you are done.

Versions

The version of Terraform is locked down via tfenv, see the .terraform-version file for the expected versions. Providers are locked down as well in the providers.tf file.
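
For reference, a minimal sketch of version pins matching the requirements table in the module documentation below; the examples ship their own providers.tf with the exact constraints.

terraform {
  required_version = ">= 1.3"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.4.0"
    }
    tls = {
      source  = "hashicorp/tls"
      version = ">= 3"
    }
  }
}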

Configure

The examples are configured with defaults that should work in general. The examples are in general configured for the region Ireland (eu-west-1). The only parameter that needs to be provided is the GitLab registration token. The token can be found in GitLab in the runner section (global, group or repo scope). Create a file terraform.tfvars and add the registration token:

    registration_token = "MY_TOKEN"

Run

Run terraform init to initialize Terraform. Next you can run terraform plan to inspect the resources that will be created.

To create the runner, run:

  terraform apply

To destroy the runner, run:

  terraform destroy

Contributors ✨

This project exists thanks to all the people who contribute.

Made with contributors-img.

Module Documentation

Requirements

Name Version
terraform >= 1.3
aws >= 4
local >= 2.4.0
tls >= 3

Providers

Name Version
aws 4.49.0
local 2.4.0
tls >= 3

Modules

Name Source Version
cache ./modules/cache n/a
terminate_agent_hook ./modules/terminate-agent-hook n/a

Resources

Name Type
aws_autoscaling_group.gitlab_runner_instance resource
aws_autoscaling_schedule.scale_in resource
aws_autoscaling_schedule.scale_out resource
aws_cloudwatch_log_group.environment resource
aws_eip.gitlab_runner resource
aws_iam_instance_profile.docker_machine resource
aws_iam_instance_profile.instance resource
aws_iam_policy.eip resource
aws_iam_policy.instance_docker_machine_policy resource
aws_iam_policy.instance_kms_policy resource
aws_iam_policy.instance_session_manager_policy resource
aws_iam_policy.service_linked_role resource
aws_iam_policy.ssm resource
aws_iam_role.docker_machine resource
aws_iam_role.instance resource
aws_iam_role_policy.instance resource
aws_iam_role_policy_attachment.docker_machine_cache_instance resource
aws_iam_role_policy_attachment.docker_machine_session_manager_aws_managed resource
aws_iam_role_policy_attachment.docker_machine_user_defined_policies resource
aws_iam_role_policy_attachment.eip resource
aws_iam_role_policy_attachment.instance_docker_machine_policy resource
aws_iam_role_policy_attachment.instance_kms_policy resource
aws_iam_role_policy_attachment.instance_session_manager_aws_managed resource
aws_iam_role_policy_attachment.instance_session_manager_policy resource
aws_iam_role_policy_attachment.service_linked_role resource
aws_iam_role_policy_attachment.ssm resource
aws_iam_role_policy_attachment.user_defined_policies resource
aws_key_pair.fleet resource
aws_kms_alias.default resource
aws_kms_key.default resource
aws_launch_template.fleet_gitlab_runner resource
aws_launch_template.gitlab_runner_instance resource
aws_security_group.docker_machine resource
aws_security_group.runner resource
aws_security_group_rule.docker_machine_docker_runner resource
aws_security_group_rule.docker_machine_docker_self resource
aws_security_group_rule.docker_machine_ping_runner resource
aws_security_group_rule.docker_machine_ping_self resource
aws_security_group_rule.docker_machine_ssh_runner resource
aws_security_group_rule.docker_machine_ssh_self resource
aws_security_group_rule.runner_ping_group resource
aws_ssm_parameter.runner_registration_token resource
aws_ssm_parameter.runner_sentry_dsn resource
local_file.config_toml resource
local_file.user_data resource
tls_private_key.fleet resource
aws_ami.docker-machine data source
aws_ami.runner data source
aws_availability_zone.runners data source
aws_caller_identity.current data source
aws_partition.current data source
aws_region.current data source
aws_subnet.runners data source

Inputs

Name Description Type Default Required
debug trace_runner_user_data: Enable bash trace for the user data script on the Agent. Be aware this could log sensitive data such as your GitLab Runner token.
write_runner_config_to_file: When enabled, outputs the rendered config.toml file in the root module. Note that enabling this can
potentially expose sensitive information.
write_runner_user_data_to_file: When enabled, outputs the rendered userdata.sh file in the root module. Note that enabling this
can potentially expose sensitive information.
object({
trace_runner_user_data = optional(bool, false)
write_runner_config_to_file = optional(bool, false)
write_runner_user_data_to_file = optional(bool, false)
})
{} no
enable_managed_kms_key Let the module manage a KMS key. Be aware of the costs of a custom key. Do not specify a kms_key_id when enable_managed_kms_key is set to true. bool false no
environment A name that identifies the environment, used as prefix and for tagging. string n/a yes
iam_object_prefix Set the name prefix of all AWS IAM resources. string "" no
iam_permissions_boundary Name of permissions boundary policy to attach to AWS IAM roles string "" no
kms_key_id KMS key id to encrypt the resources. Ensure that CloudWatch and Runner/Runner Workers have access to the provided KMS key. string "" no
kms_managed_alias_name Alias added to the created KMS key. string "" no
kms_managed_deletion_rotation_window_in_days Key deletion/rotation window for the created KMS key. Set to 0 for no rotation/deletion window. number 7 no
runner_ami_filter List of maps used to create the AMI filter for the Runner AMI. Must resolve to an Amazon Linux 1 or 2 image. map(list(string))
{
"name": [
"amzn2-ami-hvm-2.*-x86_64-ebs"
]
}
no
runner_ami_owners The list of owners used to select the AMI of the Runner instance. list(string)
[
"amazon"
]
no
runner_cloudwatch enable = Boolean used to enable or disable the CloudWatch logging.
log_group_name = Option to override the default name (environment) of the log group. Requires enable = true.
retention_days = Retention for cloudwatch logs. Defaults to unlimited. Requires enable = true.
object({
enable = optional(bool, true)
log_group_name = optional(string, null)
retention_days = optional(number, 0)
})
{} no
runner_enable_asg_recreation Enable automatic redeployment of the Runner's ASG when the Launch Configs change. bool true no
runner_gitlab ca_certificate = Trusted CA certificate bundle (PEM format).
certificate = Certificate of the GitLab instance to connect to (PEM format).
registration_token = Registration token to use to register the Runner. Do not use. This is replaced by the registration_token in runner_gitlab_registration_config.
runner_version = Version of the GitLab Runner.
url = URL of the GitLab instance to connect to.
url_clone = URL of the GitLab instance to clone from. Use only if the agent can’t connect to the GitLab URL.
object({
ca_certificate = optional(string, "")
certificate = optional(string, "")
registration_token = optional(string, "REPLACED_BY_USER_DATA")
runner_version = optional(string, "15.8.2")
url = optional(string, "")
url_clone = optional(string, "")
})
n/a yes
runner_gitlab_registration_config Configuration used to register the Runner. See the README for an example, or reference the examples in the examples directory of this repo. There is also a good GitLab documentation available at: https://docs.gitlab.com/ee/ci/runners/configure_runners.html
object({
registration_token = optional(string, "")
tag_list = optional(string, "")
description = optional(string, "")
locked_to_project = optional(string, "")
run_untagged = optional(string, "")
maximum_timeout = optional(string, "")
access_level = optional(string, "not_protected") # this is the only mandatory field calling the GitLab get token for executor operation
})
{} no
runner_gitlab_registration_token_secure_parameter_store_name The name of the SSM parameter to read the GitLab Runner registration token from. string "gitlab-runner-registration-token" no
runner_gitlab_token_secure_parameter_store Name of the Secure Parameter Store entry to hold the GitLab Runner token. string "runner-token" no
runner_install amazon_ecr_credentials_helper = Install amazon-ecr-credential-helper inside userdata_pre_install script
docker_machine_download_url = URL to download docker machine binary. If not set, the docker machine version will be used to download the binary.
docker_machine_version = By default docker_machine_download_url is used to set the docker machine version. This version will be ignored once docker_machine_download_url is set. The version number is maintained by the CKI project. Check out at https://gitlab.com/cki-project/docker-machine/-/releases
pre_install_script = Script to run before installing the Runner
post_install_script = Script to run after installing the Runner
start_script = Script to run after starting the Runner
yum_update = Update the yum packages before installing the Runner
object({
amazon_ecr_credential_helper = optional(bool, false)
docker_machine_download_url = optional(string, "")
docker_machine_version = optional(string, "0.16.2-gitlab.19-cki.2")
pre_install_script = optional(string, "")
post_install_script = optional(string, "")
start_script = optional(string, "")
yum_update = optional(bool, true)
})
{} no
runner_instance additional_tags = Map of tags that will be added to the Runner instance.
collect_autoscaling_metrics = A list of metrics to collect. The allowed values are GroupDesiredCapacity, GroupInServiceCapacity, GroupPendingCapacity, GroupMinSize, GroupMaxSize, GroupInServiceInstances, GroupPendingInstances, GroupStandbyInstances, GroupStandbyCapacity, GroupTerminatingCapacity, GroupTerminatingInstances, GroupTotalCapacity, GroupTotalInstances.
ebs_optimized = Enable EBS optimization for the Runner instance.
max_lifetime_seconds = The maximum time a Runner should live before it is killed.
monitoring = Enable the detailed monitoring on the Runner instance.
name = Name of the Runner instance.
name_prefix = Set the name prefix and override the Name tag for the Runner instance.
private_address_only = Restrict the Runner to use private IP addresses only. If this is set to true the Runner will use a private IP address only in case the Runner Workers use private addresses only.
root_device_config = The Runner's root block device configuration. Takes the following keys: device_name, delete_on_termination, volume_type, volume_size, encrypted, iops, throughput, kms_key_id
spot_price = By setting a spot price bid price the Runner is created via a spot request. Be aware that spot instances can be stopped by AWS. Choose "on-demand-price" to pay up to the current on demand price for the instance type chosen.
ssm_access = Allows to connect to the Runner via SSM.
type = EC2 instance type used.
use_eip = Assigns an EIP to the Runner.
object({
additional_tags = optional(map(string))
collect_autoscaling_metrics = optional(list(string), null)
ebs_optimized = optional(bool, true)
max_lifetime_seconds = optional(number, null)
monitoring = optional(bool, true)
name = string
name_prefix = optional(string)
private_address_only = optional(bool, true)
root_device_config = optional(map(string), {})
spot_price = optional(string, null)
ssm_access = optional(bool, false)
type = optional(string, "t3.micro")
use_eip = optional(bool, false)
})
{
"name": "gitlab-runner"
}
no
runner_manager For details check https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section

gitlab_check_interval = Number of seconds between checking for available jobs (check_interval)
maximum_concurrent_jobs = The maximum number of jobs which can be processed by all Runners at the same time (concurrent).
prometheus_listen_address = Defines an address (:) the Prometheus metrics HTTP server should listen on (listen_address).
sentry_dsn = Sentry DSN of the project for the Runner Manager to use (uses legacy DSN format) (sentry_dsn)
object({
gitlab_check_interval = optional(number, 3)
maximum_concurrent_jobs = optional(number, 10)
prometheus_listen_address = optional(string, "")
sentry_dsn = optional(string, "SENTRY_DSN_REPLACED_BY_USER_DATA")
})
{} no
runner_metadata_options Enable the Runner instance metadata service. IMDSv2 is enabled by default.
object({
http_endpoint = string
http_tokens = string
http_put_response_hop_limit = number
instance_metadata_tags = string
})
{
"http_endpoint": "enabled",
"http_put_response_hop_limit": 2,
"http_tokens": "required",
"instance_metadata_tags": "disabled"
}
no
runner_networking allow_incoming_ping = Allow ICMP Ping to the Runner. Specify allow_incoming_ping_security_group_ids too!
allow_incoming_ping_security_group_ids = A list of security group ids that are allowed to ping the Runner.
security_group_description = A description for the Runner's security group
security_group_ids = IDs of security groups to add to the Runner.
object({
allow_incoming_ping = optional(bool, false)
allow_incoming_ping_security_group_ids = optional(list(string), [])
security_group_description = optional(string, "A security group containing gitlab-runner agent instances")
security_group_ids = optional(list(string), [])
})
{} no
runner_networking_egress_rules List of egress rules for the Runner.
list(object({
cidr_blocks = list(string)
ipv6_cidr_blocks = list(string)
prefix_list_ids = list(string)
from_port = number
protocol = string
security_groups = list(string)
self = bool
to_port = number
description = string
}))
[
{
"cidr_blocks": [
"0.0.0.0/0"
],
"description": null,
"from_port": 0,
"ipv6_cidr_blocks": [
"::/0"
],
"prefix_list_ids": null,
"protocol": "-1",
"security_groups": null,
"self": null,
"to_port": 0
}
]
no
runner_role additional_tags = Map of tags that will be added to the role created. Useful for tag based authorization.
allow_iam_service_linked_role_creation = Boolean used to control attaching the policy to the Runner to create service linked roles.
assume_role_policy_json = The assume role policy for the Runner.
create_role_profile = Whether to create the IAM role/profile for the Runner. If you provide your own role, make sure that it has the required permissions.
policy_arns = List of policy ARNs to be added to the instance profile of the Runner.
role_profile_name = IAM role/profile name for the Runner. If unspecified then ${var.iam_object_prefix}-instance is used.
object({
additional_tags = optional(map(string))
allow_iam_service_linked_role_creation = optional(bool, true)
assume_role_policy_json = optional(string, "")
create_role_profile = optional(bool, true)
policy_arns = optional(list(string), [])
role_profile_name = optional(string)
})
{} no
runner_schedule_config Map containing the configuration of the ASG scale-out and scale-in for the Runner. Will only be used if runner_schedule_enable is set to true. map(any)
{
"scale_in_count": 0,
"scale_in_recurrence": "0 18 * * 1-5",
"scale_in_time_zone": "Etc/UTC",
"scale_out_count": 1,
"scale_out_recurrence": "0 8 * * 1-5",
"scale_out_time_zone": "Etc/UTC"
}
no
runner_schedule_enable Set to true to enable the auto scaling group schedule for the Runner. bool false no
runner_sentry_secure_parameter_store_name The Sentry DSN name used to store the Sentry DSN in Secure Parameter Store string "sentry-dsn" no
runner_terminate_ec2_lifecycle_hook_name Specifies a custom name for the ASG terminate lifecycle hook and related resources. string null no
runner_terraform_timeout_delete_asg Timeout when trying to delete the Runner ASG. string "10m" no
runner_worker For detailed information, check https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section.

environment_variables = List of environment variables to add to the Runner Worker (environment).
max_jobs = Number of jobs which can be processed in parallel by the Runner Worker.
output_limit = Sets the maximum build log size in kilobytes. Default is 4MB (output_limit).
request_concurrency = Limit number of concurrent requests for new jobs from GitLab (default 1) (request_concurrency).
ssm_access = Allows to connect to the Runner Worker via SSM.
type = The Runner Worker type to use. Currently supports docker+machine or docker.
object({
environment_variables = optional(list(string), [])
max_jobs = optional(number, 0)
output_limit = optional(number, 4096)
request_concurrency = optional(number, 1)
ssm_access = optional(bool, false)
type = optional(string, "docker+machine")
})
{} no
runner_worker_cache Configuration to control the creation of the cache bucket. By default the bucket will be created and used as shared
cache. To use the same cache across multiple Runner Workers, disable the creation of the cache and provide a policy and
bucket name. See the public runner example for more details.

For detailed documentation check https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnerscaches3-section

access_log_bucket_id = The ID of the bucket where the access logs are stored.
access_log_bucket_prefix = The bucket prefix for the access logs.
authentication_type = A string that declares the AuthenticationType for [runners.cache.s3]. Can either be 'iam' or 'credentials'
bucket = Name of the cache bucket. Requires create = false.
bucket_prefix = Prefix for s3 cache bucket name. Requires create = true.
create = Boolean used to enable or disable the creation of the cache bucket.
expiration_days = Number of days before cache objects expire. Requires create = true.
include_account_id = Boolean used to include the account id in the cache bucket name. Requires create = true.
policy = Policy to use for the cache bucket. Requires create = false.
random_suffix = Boolean used to enable or disable the use of a random string suffix on the cache bucket name. Requires create = true.
shared = Boolean used to enable or disable the use of the cache bucket as shared cache.
versioning = Boolean used to enable versioning on the cache bucket. Requires create = true.
object({
access_log_bucket_id = optional(string, null)
access_log_bucket_prefix = optional(string, null)
authentication_type = optional(string, "iam")
bucket = optional(string, "")
bucket_prefix = optional(string, "")
create = optional(bool, true)
expiration_days = optional(number, 1)
include_account_id = optional(bool, true)
policy = optional(string, "")
random_suffix = optional(bool, false)
shared = optional(bool, false)
versioning = optional(bool, false)
})
{} no
runner_worker_docker_add_dind_volumes Add certificates and docker.sock to the volumes to support docker-in-docker (dind) bool false no
runner_worker_docker_machine_ami_filter List of maps used to create the AMI filter for the Runner Worker. map(list(string))
{
"name": [
"ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
]
}
no
runner_worker_docker_machine_ami_owners The list of owners used to select the AMI of the Runner Worker. list(string)
[
"099720109477"
]
no
runner_worker_docker_machine_autoscaling_options Set autoscaling parameters based on periods, see https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnersmachine-section
list(object({
periods = list(string)
idle_count = optional(number)
idle_scale_factor = optional(number)
idle_count_min = optional(number)
idle_time = optional(number)
timezone = optional(string, "UTC")
}))
[] no
runner_worker_docker_machine_ec2_metadata_options Enable the Runner Worker metadata service. Requires you use CKI maintained docker machines.
object({
http_tokens = string
http_put_response_hop_limit = number
})
{
"http_put_response_hop_limit": 2,
"http_tokens": "required"
}
no
runner_worker_docker_machine_ec2_options List of additional options for the docker+machine config. Each element of this list must be a key=value pair. E.g. '["amazonec2-zone=a"]' list(string) [] no
runner_worker_docker_machine_extra_egress_rules List of egress rules for the Runner Workers.
list(object({
cidr_blocks = list(string)
ipv6_cidr_blocks = list(string)
prefix_list_ids = list(string)
from_port = number
protocol = string
security_groups = list(string)
self = bool
to_port = number
description = string
}))
[
{
"cidr_blocks": [
"0.0.0.0/0"
],
"description": "Allow all egress traffic for Runner Workers.",
"from_port": 0,
"ipv6_cidr_blocks": [
"::/0"
],
"prefix_list_ids": null,
"protocol": "-1",
"security_groups": null,
"self": null,
"to_port": 0
}
]
no
runner_worker_docker_machine_fleet enable = Activates the fleet mode on the Runner. https://gitlab.com/cki-project/docker-machine/-/blob/v0.16.2-gitlab.19-cki.2/docs/drivers/aws.md#fleet-mode
key_pair_name = The name of the key pair used by the Runner to connect to the docker-machine Runner Workers. This variable is only supported when enable is set to true.
object({
enable = bool
key_pair_name = optional(string, "fleet-key")
})
{
"enable": false
}
no
runner_worker_docker_machine_instance For detailed documentation check https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnersmachine-section

docker_registry_mirror_url = The URL of the Docker registry mirror to use for the Runner Worker.
destroy_after_max_builds = Destroy the instance after the maximum number of builds has been reached.
ebs_optimized = Enable EBS optimization for the Runner Worker.
idle_count = Number of idle Runner Worker instances (not working for the Docker Runner Worker) (IdleCount).
idle_time = Idle time of the Runner Worker before they are destroyed (not working for the Docker Runner Worker) (IdleTime).
monitoring = Enable detailed monitoring for the Runner Worker.
name_prefix = Set the name prefix and override the Name tag for the Runner Worker.
private_address_only = Restrict Runner Worker to the use of a private IP address. If runner_instance.use_private_address_only is set to true (default), runner_worker_docker_machine_instance.private_address_only will also apply for the Runner.
root_size = The size of the root volume for the Runner Worker.
start_script = Cloud-init user data that will be passed to the Runner Worker. Should not be base64 encoded.
subnet_ids = The list of subnet IDs to use for the Runner Worker when the fleet mode is enabled.
types = The type of instance to use for the Runner Worker. In case of fleet mode, multiple instance types are supported.
volume_type = The type of volume to use for the Runner Worker.
object({
destroy_after_max_builds = optional(number, 0)
docker_registry_mirror_url = optional(string, "")
ebs_optimized = optional(bool, true)
idle_count = optional(number, 0)
idle_time = optional(number, 600)
monitoring = optional(bool, false)
name_prefix = optional(string, "")
private_address_only = optional(bool, true)
root_size = optional(number, 8)
start_script = optional(string, "")
subnet_ids = optional(list(string), [])
types = optional(list(string), ["m5.large"])
volume_type = optional(string, "gp2")
})
{} no
runner_worker_docker_machine_instance_spot enable = Enable spot instances for the Runner Worker.
max_price = The maximum price willing to pay. By default the price is limited by the current on demand price for the instance type chosen.
object({
enable = optional(bool, true)
max_price = optional(string, "on-demand-price")
})
{} no
runner_worker_docker_machine_role additional_tags = Map of tags that will be added to the Runner Worker.
assume_role_policy_json = Assume role policy for the Runner Worker.
policy_arns = List of ARNs of IAM policies to attach to the Runner Workers.
profile_name = Name of the IAM profile to attach to the Runner Workers.
object({
additional_tags = optional(map(string), {})
assume_role_policy_json = optional(string, "")
policy_arns = optional(list(string), [])
profile_name = optional(string, "")
})
{} no
runner_worker_docker_machine_security_group_description A description for the Runner Worker security group string "A security group containing Runner Worker instances" no
runner_worker_docker_options Options added to the [runners.docker] section of config.toml to configure the Docker container of the Runner Worker. For
details check https://docs.gitlab.com/runner/configuration/advanced-configuration.html

Default values if the option is not given:
disable_cache = "false"
image = "docker:18.03.1-ce"
privileged = "true"
pull_policy = "always"
shm_size = 0
tls_verify = "false"
volumes = "/cache"
object({
allowed_images = optional(list(string))
allowed_pull_policies = optional(list(string))
allowed_services = optional(list(string))
cache_dir = optional(string)
cap_add = optional(list(string))
cap_drop = optional(list(string))
container_labels = optional(list(string))
cpuset_cpus = optional(string)
cpu_shares = optional(number)
cpus = optional(string)
devices = optional(list(string))
device_cgroup_rules = optional(list(string))
disable_cache = optional(bool, false)
disable_entrypoint_overwrite = optional(bool)
dns = optional(list(string))
dns_search = optional(list(string))
extra_hosts = optional(list(string))
gpus = optional(string)
helper_image = optional(string)
helper_image_flavor = optional(string)
host = optional(string)
hostname = optional(string)
image = optional(string, "docker:18.03.1-ce")
isolation = optional(string)
links = optional(list(string))
mac_address = optional(string)
memory = optional(string)
memory_swap = optional(string)
memory_reservation = optional(string)
network_mode = optional(string)
oom_kill_disable = optional(bool)
oom_score_adjust = optional(number)
privileged = optional(bool, true)
pull_policies = optional(list(string), ["always"])
runtime = optional(string)
security_opt = optional(list(string))
shm_size = optional(number, 0)
sysctls = optional(list(string))
tls_cert_path = optional(string)
tls_verify = optional(bool, false)
user = optional(string)
userns_mode = optional(string)
volumes = optional(list(string), ["/cache"])
volumes_from = optional(list(string))
volume_driver = optional(string)
wait_for_services_timeout = optional(number)
})
{
"disable_cache": "false",
"image": "docker:18.03.1-ce",
"privileged": "true",
"pull_policy": "always",
"shm_size": 0,
"tls_verify": "false",
"volumes": [
"/cache"
]
}
no
runner_worker_docker_services Starts additional services with the Docker container. All fields must be set (examine the Dockerfile of the service image for the entrypoint - see ./examples/runner-default/main.tf)
list(object({
name = string
alias = string
entrypoint = list(string)
command = list(string)
}))
[] no
runner_worker_docker_services_volumes_tmpfs Mount a tmpfs in gitlab service container. https://docs.gitlab.com/runner/executors/docker.html#mounting-a-directory-in-ram
list(object({
volume = string
options = string
}))
[] no
runner_worker_docker_volumes_tmpfs Mount a tmpfs in Executor container. https://docs.gitlab.com/runner/executors/docker.html#mounting-a-directory-in-ram
list(object({
volume = string
options = string
}))
[] no
runner_worker_gitlab_pipeline post_build_script = Script to execute in the pipeline just after the build, but before executing after_script.
pre_build_script = Script to execute in the pipeline just before the build.
pre_clone_script = Script to execute in the pipeline before cloning the Git repository. This can be used to adjust the Git client configuration first, for example.
object({
post_build_script = optional(string, "\"\"")
pre_build_script = optional(string, "\"\"")
pre_clone_script = optional(string, "\"\"")
})
{} no
security_group_prefix Set the name prefix and overwrite the Name tag for all security groups. string "" no
subnet_id Subnet id used for the Runner and Runner Workers. Must belong to the vpc_id. In case the fleet mode is used, multiple subnets for
the Runner Workers can be provided with runner_worker_docker_machine_instance.subnet_ids.
string n/a yes
suppressed_tags List of tag keys which are automatically removed and never added as default tag by the module. list(string) [] no
tags Map of tags that will be added to created resources. By default resources will be tagged with name and environment. map(string) {} no
vpc_id The VPC used for the runner and runner workers. string n/a yes

Outputs

Name Description
runner_agent_role_arn ARN of the role used for the ec2 instance for the GitLab runner agent.
runner_agent_role_name Name of the role used for the ec2 instance for the GitLab runner agent.
runner_agent_sg_id ID of the security group attached to the GitLab runner agent.
runner_as_group_name Name of the autoscaling group for the gitlab-runner instance
runner_cache_bucket_arn ARN of the S3 bucket for the build cache.
runner_cache_bucket_name Name of the S3 bucket for the build cache.
runner_config_toml_rendered The rendered config.toml given to the Runner Manager.
runner_eip EIP of the Gitlab Runner
runner_launch_template_name The name of the runner's launch template.
runner_role_arn ARN of the role used for the docker machine runners.
runner_role_name Name of the role used for the docker machine runners.
runner_sg_id ID of the security group attached to the docker machine runners.
runner_user_data (Deprecated) The user data of the Gitlab Runner Agent's launch template. Set var.debug.output_runner_user_data_to_file to true to write user_data.sh.
