Terraform module for GitLab auto scaling runners on AWS spot instances
- The module
- Prerequisites
- Usage
- Examples
- Contributors ✨
- Requirements
- Providers
- Modules
- Resources
- Inputs
- Outputs
The module
This Terraform module creates a GitLab CI runner. A blog post at 040code describes the original version of the runner. The original setup of the module is based on the blog post: Auto scale GitLab CI runners and save 90% on EC2 costs.
💥 BREAKING CHANGE AHEAD: Version 7 of the module rewrites the whole variable section to
- harmonize the variable names
- harmonize the documentation
- remove deprecated variables
- gain a better overview of the features provided
And it also adds
- all possible Docker settings
- the idle_scale_factor

We know that this is a breaking change causing some pain, but we think it is worth it and hope you agree. To make the transition as smooth as possible, we have added a migration script to the migrations folder. It covers almost all cases, but some minor rework might still be required. Check out issue 819.
The runners created by the module use spot instances by default for running the builds using the docker+machine executor.
- Shared cache in S3 with life cycle management to clear objects after x days.
- Logs streamed to CloudWatch.
- Runner agents registered automatically.
The name of the runner agent and runner is set with the overrides variable. Adding a Name tag for the agent via agent_tags does not work.
```hcl
# ...
overrides = {
  name_sg                     = ""
  name_runner_agent_instance  = "Gitlab Runner Agent"
  name_docker_machine_runners = "Gitlab Runner Terraform"
  name_iam_objects            = "gitlab-runner"
}

# this does not work
agent_tags = merge(local.my_tags, map("Name", "Gitlab Runner Agent"))
```
The runner supports 3 main scenarios:
GitLab CI docker-machine runner - one runner agent
In this scenario the runner agent runs on a single EC2 node and runners are created by docker-machine using spot instances. Runners scale automatically based on the configuration. The module creates an S3 cache by default, which is shared across runners (spot instances).
GitLab CI docker-machine runner - multiple runner agents
In this scenario multiple runner agents, each with a different configuration, can be created by instantiating the module multiple times. Runners scale automatically based on the configuration. The S3 cache can be shared across runners by managing the cache outside of the module.
GitLab CI docker runner
In this scenario docker, not docker-machine, is used to schedule the builds. Builds run on the same EC2 instance as the agent. Auto scaling is not supported.
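As a sketch, the third scenario can be selected by setting the worker type to docker. The variable names below follow the current module inputs (runner_worker); older module versions used different names:

```hcl
module "runner" {
  source = "cattle-ops/gitlab-runner/aws"
  # ... other required inputs such as vpc_id and subnet_id ...

  # Schedule builds via the docker executor on the agent instance itself:
  # no docker-machine workers, no auto scaling.
  runner_worker = {
    type = "docker"
  }
}
```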
Prerequisites
Terraform
Ensure you have Terraform installed; see .terraform-version for the version this module is developed against. A handy tool to manage your Terraform version is tfenv.
On macOS it is simple to install tfenv using brew:

```shell
brew install tfenv
```

Next install a Terraform version:

```shell
tfenv install <version>
```
AWS
Ensure you have setup your AWS credentials. The module requires access to IAM, EC2, CloudWatch, S3 and SSM.
JQ & AWS CLI
In order to be able to destroy the module, you will need to run it from a host with both jq and aws installed and accessible in the environment.
On macOS it is simple to install them using brew:

```shell
brew install jq awscli
```
Service linked roles
The GitLab runner EC2 instance requires the following service linked roles:
- AWSServiceRoleForAutoScaling
- AWSServiceRoleForEC2Spot
By default the EC2 instance is allowed to create the required roles, but this can be disabled by setting the option allow_iam_service_linked_role_creation to false. If disabled you must ensure the roles exist. You can create them manually or via Terraform.

```hcl
resource "aws_iam_service_linked_role" "spot" {
  aws_service_name = "spot.amazonaws.com"
}

resource "aws_iam_service_linked_role" "autoscaling" {
  aws_service_name = "autoscaling.amazonaws.com"
}
```
KMS keys
If a KMS key is set via kms_key_id, make sure that you also grant proper access to the key. Otherwise you might get errors, e.g. the build cache can't be decrypted or logging via CloudWatch is not possible. For a CloudWatch example, check out kms-policy.json.
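As an illustration, a customer managed key could grant CloudWatch Logs access like this. This is a sketch only: the region in the service principal and the account ID are placeholders, and a real key policy needs the account-root statement to avoid locking yourself out of the key:

```hcl
resource "aws_kms_key" "runner" {
  description = "KMS key for the GitLab runner cache and logs"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # Keep the account in control of the key (placeholder account ID).
        Sid       = "EnableIAMUserPermissions"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::123456789012:root" }
        Action    = "kms:*"
        Resource  = "*"
      },
      {
        # Allow CloudWatch Logs in this region to use the key.
        Sid       = "AllowCloudWatchLogs"
        Effect    = "Allow"
        Principal = { Service = "logs.eu-west-1.amazonaws.com" }
        Action = [
          "kms:Encrypt*",
          "kms:Decrypt*",
          "kms:ReEncrypt*",
          "kms:GenerateDataKey*",
          "kms:Describe*"
        ]
        Resource = "*"
      }
    ]
  })
}
```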
GitLab runner token configuration
By default the runner is registered on initial deployment. In previous versions of this module this was a manual process. The manual process is still supported but will be removed in future releases. The runner token will be stored in the AWS SSM parameter store. See example for more details.
To register the runner automatically set the variable gitlab_runner_registration_config["registration_token"]
. This token value
can be found in your GitLab project, group, or global settings. For a generic runner you can find the token in the admin section.
By default the runner will be locked to the target project and will not run untagged jobs. Below is an example of the configuration map.
```hcl
gitlab_runner_registration_config = {
  registration_token = "<registration token>"
  tag_list           = "<your tags, comma separated>"
  description        = "<some description>"
  locked_to_project  = "true"
  run_untagged       = "false"
  maximum_timeout    = "3600"

  # A ref_protected runner will only run on pipelines triggered on protected branches. Defaults to not_protected.
  access_level = "<not_protected OR ref_protected>"
}
```
The registration token can also be read from the SSM parameter store. If no registration token is passed in, the module will look up the token in the SSM parameter store at the location specified by secure_parameter_store_gitlab_runner_registration_token_name.
For migration to the new setup, simply add the runner token to the parameter store. Once the runner is started it will look up the required values via the parameter store. If the value is null, a new runner will be registered and a new token created/stored.
```shell
# Set the following values; look them up in your Terraform config.
# Note: shell variable names must not contain hyphens.
aws_region="<${var.aws_region}>"
token="<runner-token-see-your-gitlab-runner>"
parameter_name="<${var.environment}>-<${var.secure_parameter_store_runner_token_key}>"

aws ssm put-parameter --overwrite --type SecureString --name "${parameter_name}" --value "${token}" --region "${aws_region}"
```
Once you have created the parameter, you must remove the variable runners_token
from your config. The next time your GitLab
runner instance is created it will look up the token from the SSM parameter store.
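As a sketch, you can verify the stored token with the AWS CLI. The environment and key values below are placeholders; the actual parameter name is the environment prefix joined to the token key, as shown above:

```shell
# Hypothetical values; substitute your own environment name and parameter key.
environment="spot-runners"
token_key="runner-token"
parameter_name="${environment}-${token_key}"
echo "${parameter_name}"
# With AWS credentials configured, read the stored token back with:
# aws ssm get-parameter --name "${parameter_name}" --with-decryption --query 'Parameter.Value' --output text
```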
Finally, the runner still supports manual runner creation; no changes are required. Please keep in mind that this setup will be removed in future releases.
Auto Scaling Group
Scheduled scaling
When enable_schedule=true, the schedule_config variable can be used to scale the Auto Scaling Group. Scaling may be defined with one scale_out scheduled action and/or one scale_in scheduled action.
For example:
```hcl
module "runner" {
  # ...
  enable_schedule = true
  schedule_config = {
    # Configure optional scale_out scheduled action
    scale_out_recurrence = "0 8 * * 1-5"
    scale_out_count      = 1 # Default for min_size, desired_capacity and max_size
    # Override using: scale_out_min_size, scale_out_desired_capacity, scale_out_max_size

    # Configure optional scale_in scheduled action
    scale_in_recurrence = "0 18 * * 1-5"
    scale_in_count      = 0 # Default for min_size, desired_capacity and max_size
    # Override using: scale_in_min_size, scale_in_desired_capacity, scale_in_max_size
  }
}
```
Instance Termination
The Auto Scaling Group may be configured with a lifecycle hook that executes a provided Lambda function when the runner is terminated; the function terminates any additional instances that were spawned by the runner.
The use of the termination lifecycle hook can be toggled using the asg_termination_lifecycle_hook_create variable.
When using this feature, a builds/ directory relative to the root module will persist, containing the packaged Lambda function.
Access runner instance
A few options are provided to access the runner instance:
- Access via the Session Manager (SSM) by setting enable_runner_ssm_access to true. The policy to allow access via SSM is not very restrictive.
- By setting none of the above, no keys or extra policies will be attached to the instance. You can still configure your own policies by attaching them to runner_agent_role_arn.
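With enable_runner_ssm_access set, a session can be opened without SSH keys. A sketch (the instance ID is a placeholder; running the printed command requires AWS credentials and the Session Manager plugin):

```shell
# Hypothetical instance ID; look it up in the EC2 console or via describe-instances.
instance_id="i-0123456789abcdef0"
echo "aws ssm start-session --target ${instance_id}"
```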
GitLab runner cache
By default the module creates a cache for the runner in S3. Old objects are automatically removed via a configurable life cycle policy on the bucket.
Creation of the bucket can be disabled and managed outside this module. A good use case is for sharing the cache across multiple runners. For this purpose the cache is implemented as a sub module. For more details see the cache module. An example implementation of this use case can be found in the runner-public example.
In case you enable the access logging for the S3 cache bucket, you have to add the following statement to your S3 logging bucket policy.
```json
{
  "Sid": "Allow access logging",
  "Effect": "Allow",
  "Principal": {
    "Service": "logging.s3.amazonaws.com"
  },
  "Action": "s3:PutObject",
  "Resource": "<s3-arn>/*"
}
```
In case you manage the S3 cache bucket yourself it might be necessary to apply the cache before applying the runner module. A typical error message looks like:
```text
Error: Invalid count argument

on .terraform/modules/gitlab_runner/main.tf line 400, in resource "aws_iam_role_policy_attachment" "docker_machine_cache_instance":
  count = var.cache_bucket["create"] || length(lookup(var.cache_bucket, "policy", "")) > 0 ? 1 : 0

The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many
instances will be created. To work around this, use the -target argument to first apply only the resources that the count
depends on.
```
The workaround is to run terraform apply -target=module.cache followed by a terraform apply to apply everything else. This is a one-time effort needed at the very beginning.
Usage
Configuration
Update the variables in terraform.tfvars according to your needs and add the following variables. See the previous step for instructions on how to obtain the token.

```hcl
runner_name  = "NAME_OF_YOUR_RUNNER"
gitlab_url   = "GITLAB_URL"
runner_token = "RUNNER_TOKEN"
```
The base image used to host the GitLab Runner agent is the latest available Amazon Linux 2 HVM EBS AMI. In previous versions of this module a hard-coded list of AMIs per region was provided. This list has been replaced by a search filter to find the latest AMI. Setting the filter to amzn2-ami-hvm-2.0.20200207.1-x86_64-ebs will allow you to version-lock the target AMI.
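For example, a version lock could look like the sketch below, assuming the runner_ami_filter input (typed map(list(string))) of the current module version; the filter value is the example AMI name from above:

```hcl
runner_ami_filter = {
  name = ["amzn2-ami-hvm-2.0.20200207.1-x86_64-ebs"]
}
```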
Scenario: Basic usage
Below is a basic example of usage of the module. For dependencies such as a VPC, have a look at the default example.
```hcl
module "runner" {
  # https://registry.terraform.io/modules/cattle-ops/gitlab-runner/aws/
  source = "cattle-ops/gitlab-runner/aws"

  aws_region  = "eu-west-1"
  environment = "spot-runners"

  vpc_id                   = module.vpc.vpc_id
  subnet_ids_gitlab_runner = module.vpc.private_subnets
  subnet_id_runners        = element(module.vpc.private_subnets, 0)

  runners_name       = "docker-default"
  runners_gitlab_url = "https://gitlab.com"

  gitlab_runner_registration_config = {
    registration_token = "my-token"
    tag_list           = "docker"
    description        = "runner default"
    locked_to_project  = "true"
    run_untagged       = "false"
    maximum_timeout    = "3600"
  }
}
```
Removing the module
As the module creates a number of resources during runtime (key pairs and spot instance requests), it needs a special procedure to remove them.
1. Use the AWS Console to set the desired capacity of all auto scaling groups to 0. To find the correct ones, use var.environment as search criteria. Setting the desired capacity to 0 prevents AWS from creating new instances, which would in turn create new resources.
2. Kill all agent EC2 instances via the AWS Console. This triggers a Lambda function in the background which removes all resources created during the runtime of the EC2 instances.
3. Wait 3 minutes so the Lambda function has enough time to delete the key pairs and spot instance requests.
4. Run terraform destroy or terraform apply (depending on your setup) to remove the module.
If you don't follow the above procedure, key pairs and spot instance requests might survive the removal and might cause additional costs. But we have never seen that happen; you should also be fine executing step 4 only.
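The first step can also be scripted. A sketch using the AWS CLI, assuming var.environment is "spot-runners" and that the Auto Scaling group names contain it:

```shell
# Placeholder for your var.environment value.
environment="spot-runners"
echo "Scaling down Auto Scaling groups matching: ${environment}"
# With AWS credentials configured:
# aws autoscaling describe-auto-scaling-groups \
#   --query 'AutoScalingGroups[].AutoScalingGroupName' --output text | tr '\t' '\n' \
#   | grep "${environment}" | while read -r asg; do
#       aws autoscaling update-auto-scaling-group \
#         --auto-scaling-group-name "${asg}" --min-size 0 --desired-capacity 0
#     done
```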
Scenario: Multi-region deployment
Name clashes due to multi-region deployments for global AWS resources created by this module (IAM, S3) can be avoided by including a distinguishing region-specific prefix, either via the cache_bucket_prefix string or via name_iam_objects in the overrides map. A simple example would be to set region-specific-prefix to the AWS region the module is deployed to.
```hcl
module "runner" {
  # https://registry.terraform.io/modules/cattle-ops/gitlab-runner/aws/
  source = "cattle-ops/gitlab-runner/aws"

  aws_region  = "eu-west-1"
  environment = "spot-runners"

  vpc_id                   = module.vpc.vpc_id
  subnet_ids_gitlab_runner = module.vpc.private_subnets
  subnet_id_runners        = element(module.vpc.private_subnets, 0)

  runners_name       = "docker-default"
  runners_gitlab_url = "https://gitlab.com"

  gitlab_runner_registration_config = {
    registration_token = "my-token"
    tag_list           = "docker"
    description        = "runner default"
    locked_to_project  = "true"
    run_untagged       = "false"
    maximum_timeout    = "3600"
  }

  overrides = {
    name_iam_objects = "<region-specific-prefix>-gitlab-runner-iam"
  }

  cache_bucket_prefix = "<region-specific-prefix>"
}
```
Scenario: Use of Spot Fleet
Since spot instances can be reclaimed by AWS depending on the instance type and AZ you are using, you may want multiple instance types in multiple AZs. This is where spot fleets come in: when there is no capacity for one instance type in one AZ, AWS will take the next instance type, and so on. This has been possible since the fork of docker-machine added support for spot fleets.
We have seen that the fork of docker-machine this module is using consumes more RAM when using spot fleets. For comparison, if you launch 50 machines at the same time, it consumes ~1.2GB of RAM. In our case, we had to change the instance_type of the runner from t3.micro to t3.small.
Configuration example
```hcl
module "runner" {
  # https://registry.terraform.io/modules/npalm/gitlab-runner/aws/
  source = "npalm/gitlab-runner/aws"

  aws_region  = "eu-west-3"
  environment = "spot-runners"

  vpc_id                    = module.vpc.vpc_id
  subnet_id                 = module.vpc.private_subnets[0] # subnet of the agent
  fleet_executor_subnet_ids = module.vpc.private_subnets

  docker_machine_instance_types_fleet = ["t3a.medium", "t3.medium", "t2.medium"]
  use_fleet                           = true
  fleet_key_pair_name                 = "<key_pair_name>"

  runners_name       = "docker-machine"
  runners_gitlab_url = "https://gitlab.com"

  gitlab_runner_registration_config = {
    registration_token = "my-token"
    tag_list           = "docker"
    description        = "runner default"
    locked_to_project  = "true"
    run_untagged       = "false"
    maximum_timeout    = "3600"
  }

  overrides = {
    name_iam_objects = "<region-specific-prefix>-gitlab-runner-iam"
  }
}
```
Examples
A few examples are provided. Use the following steps to deploy. Ensure your AWS and Terraform environment is set up correctly. All commands below should be run from the terraform-aws-gitlab-runner/examples/<example-dir> directory. Don't forget to remove the runners manually from your GitLab instance as soon as you are done.
Versions
The version of Terraform is locked down via tfenv; see the .terraform-version file for the expected version. Providers are locked down as well, in the providers.tf file.
Configure
The examples are configured with defaults that should work in general. They generally use the region Ireland (eu-west-1). The only parameter that needs to be provided is the GitLab registration token. The token can be found in GitLab in the runner section (global, group or repo scope). Create a file terraform.tfvars and add the registration token:

```hcl
registration_token = "MY_TOKEN"
```
Run
Run terraform init to initialize Terraform. Next you can run terraform plan to inspect the resources that will be created.
To create the runner, run:

```shell
terraform apply
```

To destroy the runner, run:

```shell
terraform destroy
```
Contributors ✨
This project exists thanks to all the people who contribute.
Made with contributors-img.
Module Documentation
Requirements
Name | Version |
---|---|
terraform | >= 1.3 |
aws | >= 4 |
local | >= 2.4.0 |
tls | >= 3 |
Providers
Name | Version |
---|---|
aws | 4.49.0 |
local | 2.4.0 |
tls | >= 3 |
Modules
Name | Source | Version |
---|---|---|
cache | ./modules/cache | n/a |
terminate_agent_hook | ./modules/terminate-agent-hook | n/a |
Resources
Inputs
Name | Description | Type | Default | Required |
---|---|---|---|---|
debug | trace_runner_user_data: Enable bash trace for the user data script on the Agent. Be aware this could log sensitive data such as your GitLab runner token. write_runner_config_to_file: When enabled, outputs the rendered config.toml file in the root module. Note that enabling this can potentially expose sensitive information. write_runner_user_data_to_file: When enabled, outputs the rendered userdata.sh file in the root module. Note that enabling this can potentially expose sensitive information. | object({ | {} | no |
enable_managed_kms_key | Let the module manage a KMS key. Be aware of the costs of a custom key. Do not specify a kms_key_id when enable_kms is set to true. | bool | false | no |
environment | A name that identifies the environment, used as prefix and for tagging. | string | n/a | yes |
iam_object_prefix | Set the name prefix of all AWS IAM resources. | string | "" | no |
iam_permissions_boundary | Name of permissions boundary policy to attach to AWS IAM roles | string | "" | no |
kms_key_id | KMS key id to encrypt the resources. Ensure that CloudWatch and Runner/Runner Workers have access to the provided KMS key. | string | "" | no |
kms_managed_alias_name | Alias added to the created KMS key. | string | "" | no |
kms_managed_deletion_rotation_window_in_days | Key deletion/rotation window for the created KMS key. Set to 0 for no rotation/deletion window. | number | 7 | no |
runner_ami_filter | List of maps used to create the AMI filter for the Runner AMI. Must resolve to an Amazon Linux 1 or 2 image. | map(list(string)) | { | no |
runner_ami_owners | The list of owners used to select the AMI of the Runner instance. | list(string) | [ | no |
runner_cloudwatch | enable = Boolean used to enable or disable the CloudWatch logging. log_group_name = Option to override the default name (environment) of the log group. Requires enable = true. retention_days = Retention for cloudwatch logs. Defaults to unlimited. Requires enable = true. | object({ | {} | no |
runner_enable_asg_recreation | Enable automatic redeployment of the Runner's ASG when the Launch Configs change. | bool | true | no |
runner_gitlab | ca_certificate = Trusted CA certificate bundle (PEM format). certificate = Certificate of the GitLab instance to connect to (PEM format). registration_token = Registration token to use to register the Runner. Do not use. This is replaced by the registration_token in runner_gitlab_registration_config. runner_version = Version of the GitLab Runner. url = URL of the GitLab instance to connect to. url_clone = URL of the GitLab instance to clone from. Use only if the agent can't connect to the GitLab URL. | object({ | n/a | yes |
runner_gitlab_registration_config | Configuration used to register the Runner. See the README for an example, or reference the examples in the examples directory of this repo. There is also a good GitLab documentation available at: https://docs.gitlab.com/ee/ci/runners/configure_runners.html | object({ | {} | no |
runner_gitlab_registration_token_secure_parameter_store_name | The name of the SSM parameter to read the GitLab Runner registration token from. | string | "gitlab-runner-registration-token" | no |
runner_gitlab_token_secure_parameter_store | Name of the Secure Parameter Store entry to hold the GitLab Runner token. | string | "runner-token" | no |
runner_install | amazon_ecr_credentials_helper = Install amazon-ecr-credential-helper inside userdata_pre_install script. docker_machine_download_url = URL to download docker machine binary. If not set, the docker machine version will be used to download the binary. docker_machine_version = By default docker_machine_download_url is used to set the docker machine version. This version will be ignored once docker_machine_download_url is set. The version number is maintained by the CKI project. Check out https://gitlab.com/cki-project/docker-machine/-/releases. pre_install_script = Script to run before installing the Runner. post_install_script = Script to run after installing the Runner. start_script = Script to run after starting the Runner. yum_update = Update the yum packages before installing the Runner. | object({ | {} | no |
runner_instance | additional_tags = Map of tags that will be added to the Runner instance. collect_autoscaling_metrics = A list of metrics to collect. The allowed values are GroupDesiredCapacity, GroupInServiceCapacity, GroupPendingCapacity, GroupMinSize, GroupMaxSize, GroupInServiceInstances, GroupPendingInstances, GroupStandbyInstances, GroupStandbyCapacity, GroupTerminatingCapacity, GroupTerminatingInstances, GroupTotalCapacity, GroupTotalInstances. ebs_optimized = Enable EBS optimization for the Runner instance. max_lifetime_seconds = The maximum time a Runner should live before it is killed. monitoring = Enable the detailed monitoring on the Runner instance. name = Name of the Runner instance. name_prefix = Set the name prefix and override the Name tag for the Runner instance. private_address_only = Restrict the Runner to use private IP addresses only. If this is set to true the Runner will use a private IP address only in case the Runner Workers use private addresses only. root_device_config = The Runner's root block device configuration. Takes the following keys: device_name, delete_on_termination, volume_type, volume_size, encrypted, iops, throughput, kms_key_id. spot_price = By setting a spot bid price the Runner is created via a spot request. Be aware that spot instances can be stopped by AWS. Choose "on-demand-price" to pay up to the current on demand price for the instance type chosen. ssm_access = Allows to connect to the Runner via SSM. type = EC2 instance type used. use_eip = Assigns an EIP to the Runner. | object({ | { | no |
runner_manager | For details check https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section gitlab_check_interval = Number of seconds between checking for available jobs (check_interval). maximum_concurrent_jobs = The maximum number of jobs which can be processed by all Runners at the same time (concurrent). prometheus_listen_address = Defines an address (:) the Prometheus metrics HTTP server should listen on (listen_address). sentry_dsn = Sentry DSN of the project for the Runner Manager to use (uses legacy DSN format) (sentry_dsn). | object({ | {} | no |
runner_metadata_options | Enable the Runner instance metadata service. IMDSv2 is enabled by default. | object({ | { | no |
runner_networking | allow_incoming_ping = Allow ICMP Ping to the Runner. Specify allow_incoming_ping_security_group_ids too! allow_incoming_ping_security_group_ids = A list of security group ids that are allowed to ping the Runner. security_group_description = A description for the Runner's security group. security_group_ids = IDs of security groups to add to the Runner. | object({ | {} | no |
runner_networking_egress_rules | List of egress rules for the Runner. | list(object({ | [ | no |
runner_role | additional_tags = Map of tags that will be added to the role created. Useful for tag based authorization. allow_iam_service_linked_role_creation = Boolean used to control attaching the policy to the Runner to create service linked roles. assume_role_policy_json = The assume role policy for the Runner. create_role_profile = Whether to create the IAM role/profile for the Runner. If you provide your own role, make sure that it has the required permissions. policy_arns = List of policy ARNs to be added to the instance profile of the Runner. role_profile_name = IAM role/profile name for the Runner. If unspecified then ${var.iam_object_prefix}-instance is used. | object({ | {} | no |
runner_schedule_config | Map containing the configuration of the ASG scale-out and scale-in for the Runner. Will only be used if runner_schedule_enable is set to true. | map(any) | { | no |
runner_schedule_enable | Set to true to enable the auto scaling group schedule for the Runner. | bool | false | no |
runner_sentry_secure_parameter_store_name | The Sentry DSN name used to store the Sentry DSN in Secure Parameter Store | string | "sentry-dsn" | no |
runner_terminate_ec2_lifecycle_hook_name | Specifies a custom name for the ASG terminate lifecycle hook and related resources. | string | null | no |
runner_terraform_timeout_delete_asg | Timeout when trying to delete the Runner ASG. | string | "10m" | no |
runner_worker | For detailed information, check https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section. environment_variables = List of environment variables to add to the Runner Worker (environment). max_jobs = Number of jobs which can be processed in parallel by the Runner Worker. output_limit = Sets the maximum build log size in kilobytes. Default is 4MB (output_limit). request_concurrency = Limit number of concurrent requests for new jobs from GitLab (default 1) (request_concurrency). ssm_access = Allows to connect to the Runner Worker via SSM. type = The Runner Worker type to use. Currently supports docker+machine or docker. | object({ | {} | no |
runner_worker_cache | Configuration to control the creation of the cache bucket. By default the bucket will be created and used as shared cache. To use the same cache across multiple Runner Workers, disable the creation of the cache and provide a policy and bucket name. See the public runner example for more details. For detailed documentation check https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnerscaches3-section access_log_bucket_id = The ID of the bucket where the access logs are stored. access_log_bucket_prefix = The bucket prefix for the access logs. authentication_type = A string that declares the AuthenticationType for [runners.cache.s3]. Can either be 'iam' or 'credentials'. bucket = Name of the cache bucket. Requires create = false. bucket_prefix = Prefix for s3 cache bucket name. Requires create = true. create = Boolean used to enable or disable the creation of the cache bucket. expiration_days = Number of days before cache objects expire. Requires create = true. include_account_id = Boolean used to include the account id in the cache bucket name. Requires create = true. policy = Policy to use for the cache bucket. Requires create = false. random_suffix = Boolean used to enable or disable the use of a random string suffix on the cache bucket name. Requires create = true. shared = Boolean used to enable or disable the use of the cache bucket as shared cache. versioning = Boolean used to enable versioning on the cache bucket. Requires create = true. | object({ | {} | no |
runner_worker_docker_add_dind_volumes | Add certificates and docker.sock to the volumes to support docker-in-docker (dind) | bool | false | no |
runner_worker_docker_machine_ami_filter | List of maps used to create the AMI filter for the Runner Worker. | map(list(string)) | { | no |
runner_worker_docker_machine_ami_owners | The list of owners used to select the AMI of the Runner Worker. | list(string) | [ | no |
runner_worker_docker_machine_autoscaling_options | Set autoscaling parameters based on periods, see https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnersmachine-section | list(object({ | [] | no |
runner_worker_docker_machine_ec2_metadata_options | Enable the Runner Worker metadata service. Requires you use CKI maintained docker machines. | object({ | { | no |
runner_worker_docker_machine_ec2_options | List of additional options for the docker+machine config. Each element of this list must be a key=value pair. E.g. '["amazonec2-zone=a"]' | list(string) | [] | no |
runner_worker_docker_machine_extra_egress_rules | List of egress rules for the Runner Workers. | list(object({ | [ | no |
runner_worker_docker_machine_fleet | enable = Activates the fleet mode on the Runner. https://gitlab.com/cki-project/docker-machine/-/blob/v0.16.2-gitlab.19-cki.2/docs/drivers/aws.md#fleet-mode key_pair_name = The name of the key pair used by the Runner to connect to the docker-machine Runner Workers. This variable is only supported when enable is set to true. | object({ | { | no |
runner_worker_docker_machine_instance | For detailed documentation check https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnersmachine-section docker_registry_mirror_url = The URL of the Docker registry mirror to use for the Runner Worker. destroy_after_max_builds = Destroy the instance after the maximum number of builds has been reached. ebs_optimized = Enable EBS optimization for the Runner Worker. idle_count = Number of idle Runner Worker instances (not working for the Docker Runner Worker) (IdleCount). idle_time = Idle time of the Runner Worker before they are destroyed (not working for the Docker Runner Worker) (IdleTime). monitoring = Enable detailed monitoring for the Runner Worker. name_prefix = Set the name prefix and override the Name tag for the Runner Worker. private_address_only = Restrict Runner Worker to the use of a private IP address. If runner_instance.use_private_address_only is set to true (default), runner_worker_docker_machine_instance.private_address_only will also apply for the Runner. root_size = The size of the root volume for the Runner Worker. start_script = Cloud-init user data that will be passed to the Runner Worker. Should not be base64 encoded. subnet_ids = The list of subnet IDs to use for the Runner Worker when the fleet mode is enabled. types = The type of instance to use for the Runner Worker. In case of fleet mode, multiple instance types are supported. volume_type = The type of volume to use for the Runner Worker. | object({ | {} | no |
runner_worker_docker_machine_instance_spot | enable = Enable spot instances for the Runner Worker. max_price = The maximum price willing to pay. By default the price is limited by the current on demand price for the instance type chosen. | object({ | {} | no |
runner_worker_docker_machine_role | additional_tags = Map of tags that will be added to the Runner Worker. assume_role_policy_json = Assume role policy for the Runner Worker. policy_arns = List of ARNs of IAM policies to attach to the Runner Workers. profile_name = Name of the IAM profile to attach to the Runner Workers. | object({ | {} | no |
runner_worker_docker_machine_security_group_description | A description for the Runner Worker security group | string | "A security group containing Runner Worker instances" | no |
runner_worker_docker_options | Options added to the [runners.docker] section of config.toml to configure the Docker container of the Runner Worker. For details check https://docs.gitlab.com/runner/configuration/advanced-configuration.html Default values if the option is not given: disable_cache = "false" image = "docker:18.03.1-ce" privileged = "true" pull_policy = "always" shm_size = 0 tls_verify = "false" volumes = "/cache" | object({ | { | no |
runner_worker_docker_services | Starts additional services with the Docker container. All fields must be set (examine the Dockerfile of the service image for the entrypoint - see ./examples/runner-default/main.tf) | list(object({ | [] | no |
runner_worker_docker_services_volumes_tmpfs | Mount a tmpfs in gitlab service container. https://docs.gitlab.com/runner/executors/docker.html#mounting-a-directory-in-ram | list(object({ | [] | no |
runner_worker_docker_volumes_tmpfs | Mount a tmpfs in Executor container. https://docs.gitlab.com/runner/executors/docker.html#mounting-a-directory-in-ram | list(object({ | [] | no |
runner_worker_gitlab_pipeline | post_build_script = Script to execute in the pipeline just after the build, but before executing after_script. pre_build_script = Script to execute in the pipeline just before the build. pre_clone_script = Script to execute in the pipeline before cloning the Git repository. This can be used to adjust the Git client configuration first, for example. | object({ | {} | no |
security_group_prefix | Set the name prefix and overwrite the Name tag for all security groups. | string | "" | no |
subnet_id | Subnet id used for the Runner and Runner Workers. Must belong to the vpc_id. In case the fleet mode is used, multiple subnets for the Runner Workers can be provided with runner_worker_docker_machine_instance.subnet_ids. | string | n/a | yes |
suppressed_tags | List of tag keys which are automatically removed and never added as default tag by the module. | list(string) | [] | no |
tags | Map of tags that will be added to created resources. By default resources will be tagged with name and environment. | map(string) | {} | no |
vpc_id | The VPC used for the runner and runner workers. | string | n/a | yes |
Outputs
Name | Description |
---|---|
runner_agent_role_arn | ARN of the role used for the EC2 instance of the GitLab runner agent. |
runner_agent_role_name | Name of the role used for the EC2 instance of the GitLab runner agent. |
runner_agent_sg_id | ID of the security group attached to the GitLab runner agent. |
runner_as_group_name | Name of the autoscaling group for the gitlab-runner instance |
runner_cache_bucket_arn | ARN of the S3 bucket for the build cache. |
runner_cache_bucket_name | Name of the S3 bucket for the build cache. |
runner_config_toml_rendered | The rendered config.toml given to the Runner Manager. |
runner_eip | EIP of the GitLab Runner |
runner_launch_template_name | The name of the runner's launch template. |
runner_role_arn | ARN of the role used for the docker machine runners. |
runner_role_name | Name of the role used for the docker machine runners. |
runner_sg_id | ID of the security group attached to the docker machine runners. |
runner_user_data | (Deprecated) The user data of the GitLab Runner Agent's launch template. Set var.debug.write_runner_user_data_to_file to true to write user_data.sh. |