  • Stars: 255
  • Rank: 153,974 (Top 4%)
  • Language: HCL
  • License: Apache License 2.0
  • Created: over 6 years ago
  • Updated: 11 months ago


Repository Details

Terraform module to easily provision CloudFront CDN backed by an S3 origin

terraform-aws-cloudfront-s3-cdn


Terraform module to provision an AWS CloudFront CDN with an S3 origin.


This project is part of our comprehensive "SweetOps" approach towards DevOps.

Terraform Open Source Modules

It's 100% Open Source and licensed under the APACHE2.

We literally have hundreds of terraform modules that are Open Source and well-maintained. Check them out!

Security & Compliance

Security scanning is graciously provided by Bridgecrew. Bridgecrew is the leading fully hosted, cloud-native solution providing continuous Terraform security and compliance.

  • Infrastructure Security: Infrastructure Security Compliance
  • CIS KUBERNETES: Center for Internet Security, KUBERNETES Compliance
  • CIS AWS: Center for Internet Security, AWS Compliance
  • CIS AZURE: Center for Internet Security, AZURE Compliance
  • PCI-DSS: Payment Card Industry Data Security Standards Compliance
  • NIST-800-53: National Institute of Standards and Technology Compliance
  • ISO27001: Information Security Management System, ISO/IEC 27001 Compliance
  • SOC2: Service Organization Control 2 Compliance
  • CIS GCP: Center for Internet Security, GCP Compliance
  • HIPAA: Health Insurance Portability and Accountability Compliance

Usage

IMPORTANT: We do not pin modules to versions in our examples because of the difficulty of keeping the versions in the documentation in sync with the latest released versions. We highly recommend that in your code you pin the version to the exact version you are using so that your infrastructure remains stable, and update versions in a systematic way so that they do not catch you by surprise.

Also, because of a bug in the Terraform registry (hashicorp/terraform#21417), the registry shows many of our inputs as required when in fact they are optional. The table below correctly indicates which inputs are required.

For a complete example, see examples/complete.

For automated tests of the complete example using bats and Terratest (which tests and deploys the example on AWS), see test.

The following will create a new S3 bucket eg-prod-app for a CloudFront CDN, allow principal1 to upload to prefix1 and prefix2, and allow principal2 to manage the whole bucket.

module "cdn" {
  source = "cloudposse/cloudfront-s3-cdn/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  namespace         = "eg"
  stage             = "prod"
  name              = "app"
  aliases           = ["assets.cloudposse.com"]
  dns_alias_enabled = true
  parent_zone_name  = "cloudposse.com"

  deployment_principal_arns = {
    "arn:aws:iam::123456789012:role/principal1" = ["prefix1/", "prefix2/"]
    "arn:aws:iam::123456789012:role/principal2" = [""]
  }
}

The following will reuse an existing S3 bucket eg-prod-app for a CloudFront CDN.

module "cdn" {
  source = "cloudposse/cloudfront-s3-cdn/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  origin_bucket     = "eg-prod-app"
  aliases           = ["assets.cloudposse.com"]
  dns_alias_enabled = true
  parent_zone_name  = "cloudposse.com"
}

The following will create an Origin Group with the origin created by this module as a primary origin and an additional S3 bucket as a failover origin.

module "s3_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  attributes = ["failover-assets"]
}

module "cdn" {
  source = "cloudposse/cloudfront-s3-cdn/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  aliases           = ["assets.cloudposse.com"]
  dns_alias_enabled = true
  parent_zone_name  = "cloudposse.com"
  s3_origins = [{
    domain_name = module.s3_bucket.bucket_regional_domain_name
    origin_id   = module.s3_bucket.bucket_id
    origin_path = null
    s3_origin_config = {
      origin_access_identity = null # will get translated to the origin_access_identity used by the origin created by this module.
    }
  }]
  origin_groups = [{
    primary_origin_id  = null # will get translated to the origin id of the origin created by this module.
    failover_origin_id = module.s3_bucket.bucket_id
    failover_criteria  = [
      403,
      404,
      500,
      502
    ]
  }]
}

Background on CDNs, "Origins", S3 Buckets, and Web Servers

CDNs and Origin Servers

There are some settings you need to be aware of when using this module. In order to understand the settings, you need to understand some of the basics of CDNs and web servers, so we are providing this highly simplified explanation of how they work in order for you to understand the implications of the settings you are providing.

A "CDN" (Content Distribution Network) is a collection of servers scattered around the internet with the aim of making it faster for people to retrieve content from a website. The details of why that is wanted/needed are beyond the scope of this document, as are most of the details of how a CDN is implemented. For this discussion, we will simply treat a CDN as a set of web servers all serving the same content to different users.

In a normal web server (again, greatly simplified), you place files on the server and the web server software receives requests from browsers and responds with the contents of the files.

For a variety of reasons, the web servers in a CDN do not work the way normal web servers work. Instead of getting their content from files on the local server, the CDN web servers get their content by acting like web browsers (proxies). When they get a request from a browser, they make the same request to what is called an "Origin Server". It is called an origin server because it serves the original content of the website, and thus is the origin of the content.

As a website publisher, you put content on an Origin Server (which users usually should be prevented from accessing) and configure your CDN to use your Origin Server. Then you direct users to a URL hosted by your CDN provider, the users' browsers connect to the CDN, the CDN gets the content from your Origin Server, your Origin Server gets the content from a file on the server, and the data gets sent back hop by hop to the user. (The reason this ends up being a good idea is that the CDN can cache the content for a while, serving multiple users the same content while only contacting the origin server once.)

S3 Buckets: file storage and web server

S3 buckets were originally designed just to store files, and they are still most often used for that. They have a lot of access controls to make it possible to strictly limit who can read what files in the bucket, so that companies can store sensitive information there. You may have heard of a number of "data breaches" being caused by misconfigured permissions on S3 buckets, making them publicly accessible. As a result of that, Amazon has some extra settings on top of everything else to keep S3 buckets from being publicly accessible, which is usually a good thing.

However, at some point someone realized that since these files were in the cloud, and Amazon already had these web servers running to provide access to the files in the cloud, it was only a tiny leap to turn an S3 bucket into a web server. So now S3 buckets can be published as websites with a few configuration settings, including making the contents publicly accessible.

Web servers, files, and the different modes of S3 buckets

In the simplest websites, the URL "path" (the part after the site name) corresponds directly to the path (under a special directory we will call /webroot) and name of a file on the web server. So if the web server gets a request for "http://example.com/foo/bar/baz.html" it will look for a file /webroot/foo/bar/baz.html. If it exists, the server will return its contents, and if it does not exist, the server will return a Not Found error. An S3 bucket, whether configured as a file store or a website, will always do both of these things.

Web servers, however, do some helpful extra things. To name a few:

  • If the URL ends with a /, as in http://example.com/foo/bar/, the web server (depending on how it is configured) will either return a list of files in the directory or it will return the contents of a file in the directory with a special name (by default, index.html) if it exists.
  • If the URL does not end with a / but the last part, instead of being a file name, is a directory name, the web server will redirect the user to the URL with the / at the end instead of saying the file was Not Found. This redirect will get you to the index.html file we just talked about. Given the way people pass URLs around, this turns out to be quite helpful.
  • If the URL does not point to a directory or a file, instead of just sending back a cryptic Not Found error code, it can return the contents of a special file called an "error document".
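
Even with the bucket left in plain file-storage mode, CloudFront itself can return a friendlier page than a bare Not Found error by using this module's custom_error_response input. A minimal sketch under that assumption (the error page object and TTL values are illustrative, not prescribed by the module):

module "cdn" {
  source = "cloudposse/cloudfront-s3-cdn/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  namespace = "eg"
  stage     = "prod"
  name      = "app"

  custom_error_response = [
    {
      error_code            = "404"
      response_code         = "200"
      response_page_path    = "/404.html" # hypothetical object stored in the origin bucket
      error_caching_min_ttl = "10"
    }
  ]
}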

Your Critical Decision: S3 bucket or website?

All of this background is to help you decide how to set website_enabled and s3_website_password_enabled. The default for website_enabled is false, which is the easiest to configure and the most secure; with this setting, s3_website_password_enabled is ignored.

S3 buckets, in file storage mode (website_enabled = false), do none of these extra things that web servers do. If the URL points to a file, it will return the file, and if it does not exactly match a file, it will return Not Found. One big advantage, though, is that the S3 bucket can remain private (not publicly accessible). A second, related advantage is that you can limit the website to a portion of the S3 bucket (everything under a certain prefix) and keep the contents under the other prefixes private.

S3 buckets configured as static websites (website_enabled = true), however, have these extra web server features like redirects, index.html, and error documents. The disadvantage is that you have to make the entire bucket public (although you can still restrict access to some portions of the bucket).

Another feature or drawback (depending on your point of view) of S3 buckets configured as static websites is that they are directly accessible via their website endpoint as well as through CloudFront. This module has a feature, s3_website_password_enabled, that requires a password to be passed in the HTTP request header and configures the CDN to send it, which makes it much harder to access the S3 website directly. Set s3_website_password_enabled = true to limit direct access to the S3 website, or set it to false if you want to be able to bypass CloudFront.

In addition to setting website_enabled=true, you must also:

  • Specify at least one alias, like ["example.com"] or ["example.com", "www.example.com"]
  • Specify an ACM certificate
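
A minimal sketch of website mode follows; the alias, parent zone, and certificate ARN are placeholders, and the ACM certificate is assumed to already exist in us-east-1:

module "cdn" {
  source = "cloudposse/cloudfront-s3-cdn/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  namespace         = "eg"
  stage             = "prod"
  name              = "app"
  aliases           = ["assets.cloudposse.com"]
  dns_alias_enabled = true
  parent_zone_name  = "cloudposse.com"

  website_enabled             = true
  s3_website_password_enabled = true # make it hard to bypass CloudFront and hit the S3 website endpoint directly
  index_document              = "index.html"
  error_document              = "error.html" # illustrative 4XX error document

  # For CloudFront, the certificate must be provisioned in us-east-1
  acm_certificate_arn = "arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE"
}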

Custom Domain Names and Generating a TLS Certificate with ACM

When you set up CloudFront, Amazon will generate a domain name for your website. You almost certainly will not want to publish that. Instead, you will want to use custom domain names, which this module refers to as "aliases".

To use the custom domain names, you need to

  • Pass them in as aliases so that Cloudfront will respond to them with your content
  • Create CNAMEs for the aliases to point to the Cloudfront domain name. If your alias domains are hosted by Route53 and you have IAM permissions to modify them, this module will set that up for you if you set dns_alias_enabled = true.
  • Generate a TLS Certificate via ACM that includes all the aliases and pass the ARN for the certificate in acm_certificate_arn. Note that for CloudFront, the certificate has to be provisioned in the us-east-1 region regardless of where any other resources are.
# For cloudfront, the acm has to be created in us-east-1 or it will not work
provider "aws" {
  region = "us-east-1"
  alias  = "aws.us-east-1"
}

# create acm and explicitly set it to us-east-1 provider
module "acm_request_certificate" {
  source = "cloudposse/acm-request-certificate/aws"
  providers = {
    aws = aws.us-east-1
  }

  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"
  domain_name                       = "example.com"
  subject_alternative_names         = ["a.example.com", "b.example.com", "*.c.example.com"]
  process_domain_validation_options = true
  ttl                               = "300"
}

module "cdn" {
  source = "cloudposse/cloudfront-s3-cdn/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version     = "x.x.x"
  namespace         = "eg"
  stage             = "prod"
  name              = "app"
  aliases           = ["assets.cloudposse.com"]
  dns_alias_enabled = true
  parent_zone_name  = "cloudposse.com"

  acm_certificate_arn = module.acm_request_certificate.arn

  depends_on = [module.acm_request_certificate]
}

Or use the AWS CLI to request new ACM certificates (requires email validation):

aws acm request-certificate --domain-name example.com --subject-alternative-names a.example.com b.example.com "*.c.example.com"

NOTE:

Although AWS Certificate Manager is supported in many AWS regions, to use an SSL certificate with CloudFront, it should be requested only in US East (N. Virginia) region.

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cnames-and-https-requirements.html

If you want to require HTTPS between viewers and CloudFront, you must change the AWS region to US East (N. Virginia) in the AWS Certificate Manager console before you request or import a certificate.

https://docs.aws.amazon.com/acm/latest/userguide/acm-regions.html

To use an ACM Certificate with Amazon CloudFront, you must request or import the certificate in the US East (N. Virginia) region. ACM Certificates in this region that are associated with a CloudFront distribution are distributed to all the geographic locations configured for that distribution.

This is a fundamental requirement of CloudFront, and you will need to request the certificate in the us-east-1 region.

If there are warnings around the outputs when destroying using this module, you can suppress the superfluous errors by running TF_WARN_OUTPUT_ERRORS=1 terraform destroy.

Lambda@Edge

This module also features a Lambda@Edge submodule. Its lambda_function_association output is meant to feed directly into the variable of the same name in the parent module.

provider "aws" {
  region = var.region
}

provider "aws" {
  region = "us-east-1"
  alias  = "us-east-1"
}

module "lambda_at_edge" {
  source = "cloudposse/cloudfront-s3-cdn/aws//modules/lambda@edge"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  functions = {
    origin_request = {
      source = [{
        content  = <<-EOT
        'use strict';

        exports.handler = (event, context, callback) => {

          //Get contents of response
          const response = event.Records[0].cf.response;
          const headers = response.headers;

          //Set new headers
          headers['strict-transport-security'] = [{key: 'Strict-Transport-Security', value: 'max-age=63072000; includeSubdomains; preload'}];
          headers['content-security-policy'] = [{key: 'Content-Security-Policy', value: "default-src 'none'; img-src 'self'; script-src 'self'; style-src 'self'; object-src 'none'"}];
          headers['x-content-type-options'] = [{key: 'X-Content-Type-Options', value: 'nosniff'}];
          headers['x-frame-options'] = [{key: 'X-Frame-Options', value: 'DENY'}];
          headers['x-xss-protection'] = [{key: 'X-XSS-Protection', value: '1; mode=block'}];
          headers['referrer-policy'] = [{key: 'Referrer-Policy', value: 'same-origin'}];

          //Return modified response
          callback(null, response);
        };
        EOT
        filename = "index.js"
      }]
      runtime      = "nodejs12.x"
      handler      = "index.handler"
      event_type   = "origin-response"
      include_body = false
    }
  }

  # An AWS Provider configured for us-east-1 must be passed to the module, as Lambda@Edge functions must exist in us-east-1
  providers = {
    aws = aws.us-east-1
  }

  context = module.this.context
}


module "cdn" {
  source = "cloudposse/cloudfront-s3-cdn/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  ...
  lambda_function_association = module.lambda_at_edge.lambda_function_association
}

Makefile Targets

Available targets:

  help                                Help screen
  help/all                            Display help for all targets
  help/short                          This help short screen
  lint                                Lint terraform code

Requirements

Name Version
terraform >= 1.3
aws >= 3.64.0, != 4.0.0, != 4.1.0, != 4.2.0, != 4.3.0, != 4.4.0, != 4.5.0, != 4.6.0, != 4.7.0, != 4.8.0
random >= 2.2
time >= 0.7
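
Taken together, these constraints translate into a root-module terraform block along the following lines (a sketch; the hashicorp/* registry addresses are the usual provider sources and are not spelled out in the table above):

terraform {
  required_version = ">= 1.3"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.64.0, != 4.0.0, != 4.1.0, != 4.2.0, != 4.3.0, != 4.4.0, != 4.5.0, != 4.6.0, != 4.7.0, != 4.8.0"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 2.2"
    }
    time = {
      source  = "hashicorp/time"
      version = ">= 0.7"
    }
  }
}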

Providers

Name Version
aws >= 3.64.0, != 4.0.0, != 4.1.0, != 4.2.0, != 4.3.0, != 4.4.0, != 4.5.0, != 4.6.0, != 4.7.0, != 4.8.0
random >= 2.2
time >= 0.7

Modules

Name Source Version
dns cloudposse/route53-alias/aws 0.13.0
logs cloudposse/s3-log-storage/aws 0.26.0
origin_label cloudposse/label/null 0.25.0
this cloudposse/label/null 0.25.0

Resources

Name Type
aws_cloudfront_distribution.default resource
aws_cloudfront_origin_access_identity.default resource
aws_s3_bucket.origin resource
aws_s3_bucket_ownership_controls.origin resource
aws_s3_bucket_policy.default resource
aws_s3_bucket_public_access_block.origin resource
random_password.referer resource
time_sleep.wait_for_aws_s3_bucket_settings resource
aws_iam_policy_document.combined data source
aws_iam_policy_document.deployment data source
aws_iam_policy_document.s3_origin data source
aws_iam_policy_document.s3_ssl_only data source
aws_iam_policy_document.s3_website_origin data source
aws_partition.current data source
aws_region.current data source
aws_s3_bucket.cf_logs data source
aws_s3_bucket.origin data source

Inputs

Name Description Type Default Required
access_log_bucket_name DEPRECATED. Use s3_access_log_bucket_name instead. string null no
acm_certificate_arn Existing ACM Certificate ARN string "" no
additional_bucket_policy Additional policies for the bucket. If included in the policies, the variables ${bucket_name}, ${origin_path} and ${cloudfront_origin_access_identity_iam_arn} will be substituted.
It is also possible to override the default policy statements by providing statements with S3GetObjectForCloudFront and S3ListBucketForCloudFront sid.
string "{}" no
additional_tag_map Additional key-value pairs to add to each map in tags_as_list_of_maps. Not added to tags or id.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
map(string) {} no
aliases List of FQDN's - Used to set the Alternate Domain Names (CNAMEs) setting on Cloudfront list(string) [] no
allow_ssl_requests_only Set to true to require requests to use Secure Socket Layer (HTTPS/SSL). This will explicitly deny access to HTTP requests bool true no
allowed_methods List of allowed methods (e.g. GET, PUT, POST, DELETE, HEAD) for AWS CloudFront list(string)
[
"DELETE",
"GET",
"HEAD",
"OPTIONS",
"PATCH",
"POST",
"PUT"
]
no
attributes ID element. Additional attributes (e.g. workers or cluster) to add to id,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the delimiter
and treated as a single ID element.
list(string) [] no
block_origin_public_access_enabled When set to 'true' the s3 origin bucket will have public access block enabled bool false no
cache_policy_id The unique identifier of the existing cache policy to attach to the default cache behavior.
If not provided, this module will add a default cache policy using other provided inputs.
string null no
cached_methods List of cached methods (e.g. GET, PUT, POST, DELETE, HEAD) list(string)
[
"GET",
"HEAD"
]
no
cloudfront_access_log_bucket_name When cloudfront_access_log_create_bucket is false, this is the name of the existing S3 Bucket where
Cloudfront Access Logs are to be delivered and is required. IGNORED when cloudfront_access_log_create_bucket is true.
string "" no
cloudfront_access_log_create_bucket When true and cloudfront_access_logging_enabled is also true, this module will create a new,
separate S3 bucket to receive Cloudfront Access Logs.
bool true no
cloudfront_access_log_include_cookies Set true to include cookies in Cloudfront Access Logs bool false no
cloudfront_access_log_prefix Prefix to use for Cloudfront Access Log object keys. Defaults to no prefix. string "" no
cloudfront_access_logging_enabled Set true to enable delivery of Cloudfront Access Logs to an S3 bucket bool true no
cloudfront_origin_access_identity_iam_arn Existing cloudfront origin access identity iam arn that is supplied in the s3 bucket policy string "" no
cloudfront_origin_access_identity_path Existing cloudfront origin access identity path used in the cloudfront distribution's s3_origin_config content string "" no
comment Comment for the origin access identity string "Managed by Terraform" no
compress Compress content for web requests that include Accept-Encoding: gzip in the request header bool true no
context Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as null to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
any
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
no
cors_allowed_headers List of allowed headers for S3 bucket list(string)
[
"*"
]
no
cors_allowed_methods List of allowed methods (e.g. GET, PUT, POST, DELETE, HEAD) for S3 bucket list(string)
[
"GET"
]
no
cors_allowed_origins List of allowed origins (e.g. example.com, test.com) for S3 bucket list(string) [] no
cors_expose_headers List of headers to expose in the response for the S3 bucket list(string)
[
"ETag"
]
no
cors_max_age_seconds Time in seconds that browser can cache the response for S3 bucket number 3600 no
custom_error_response List of one or more custom error response element maps
list(object({
error_caching_min_ttl = string
error_code = string
response_code = string
response_page_path = string
}))
[] no
custom_origin_headers A list of origin header parameters that will be sent to origin list(object({ name = string, value = string })) [] no
custom_origins A list of additional custom website origins for this distribution.
list(object({
domain_name = string
origin_id = string
origin_path = string
custom_headers = list(object({
name = string
value = string
}))
custom_origin_config = object({
http_port = number
https_port = number
origin_protocol_policy = string
origin_ssl_protocols = list(string)
origin_keepalive_timeout = number
origin_read_timeout = number
})
}))
[] no
default_root_object Object that CloudFront returns when a request is made to the root URL string "index.html" no
default_ttl Default amount of time (in seconds) that an object is in a CloudFront cache number 60 no
delimiter Delimiter to be used between ID elements.
Defaults to - (hyphen). Set to "" to use no delimiter at all.
string null no
deployment_actions List of actions to permit deployment_principal_arns to perform on bucket and bucket prefixes (see deployment_principal_arns) list(string)
[
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:DeleteObject",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:GetBucketLocation",
"s3:AbortMultipartUpload"
]
no
deployment_principal_arns (Optional) Map of IAM Principal ARNs to lists of S3 path prefixes to grant deployment_actions permissions.
Resource list will include the bucket itself along with all the prefixes. Prefixes should not begin with '/'.
map(list(string)) {} no
descriptor_formats Describe additional descriptors to be output in the descriptors output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
{<br> format = string<br> labels = list(string)<br>}
(Type is any so the map values can later be enhanced to provide additional options.)
format is a Terraform format string to be passed to the format() function.
labels is a list of labels, in order, to pass to format() function.
Label values will be normalized before being passed to format() so they will be
identical to how they appear in id.
Default is {} (descriptors output will be empty).
any {} no
distribution_enabled Set to false to create the distribution but still prevent CloudFront from serving requests. bool true no
dns_alias_enabled Create a DNS alias for the CDN. Requires parent_zone_id or parent_zone_name bool false no
dns_allow_overwrite Allow creation of DNS records in Terraform to overwrite an existing record, if any. This does not affect the ability to update the record in Terraform and does not prevent other resources within Terraform or manual Route 53 changes outside Terraform from overwriting this record. false by default. This configuration is not recommended for most environments bool false no
enabled Set to false to prevent the module from creating any resources bool null no
encryption_enabled When set to 'true' the resource will have aes256 encryption enabled by default bool true no
environment ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT' string null no
error_document An absolute path to the document to return in case of a 4XX error string "" no
external_aliases List of FQDN's - Used to set the Alternate Domain Names (CNAMEs) setting on Cloudfront. No new route53 records will be created for these list(string) [] no
extra_logs_attributes Additional attributes to add to the end of the generated Cloudfront Access Log S3 Bucket name.
Only effective if cloudfront_access_log_create_bucket is true.
list(string)
[
"logs"
]
no
extra_origin_attributes Additional attributes to put onto the origin label list(string)
[
"origin"
]
no
forward_cookies Specifies whether you want CloudFront to forward all or no cookies to the origin. Can be 'all' or 'none' string "none" no
forward_header_values A list of whitelisted header values to forward to the origin (incompatible with cache_policy_id) list(string)
[
"Access-Control-Request-Headers",
"Access-Control-Request-Method",
"Origin"
]
no
forward_query_string Forward query strings to the origin that is associated with this cache behavior (incompatible with cache_policy_id) bool false no
function_association A config block that triggers a CloudFront function with specific actions.
See the aws_cloudfront_distribution
documentation for more information.
list(object({
event_type = string
function_arn = string
}))
[] no
geo_restriction_locations List of country codes for which CloudFront either to distribute content (whitelist) or not distribute your content (blacklist) list(string) [] no
geo_restriction_type Method that use to restrict distribution of your content by country: none, whitelist, or blacklist string "none" no
http_version The maximum HTTP version to support on the distribution. Allowed values are http1.1, http2, http2and3 and http3 string "http2" no
id_length_limit Limit id to this many characters (minimum 6).
Set to 0 for unlimited length.
Set to null to keep the existing setting, which defaults to 0.
Does not affect id_full.
number null no
index_document Amazon S3 returns this index document when requests are made to the root domain or any of the subfolders string "index.html" no
ipv6_enabled Set to true to enable an AAAA DNS record to be set as well as the A record bool true no
label_key_case Controls the letter case of the tags keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the tags input.
Possible values: lower, title, upper.
Default value: title.
string null no
label_order The order in which the labels (ID elements) appear in the id.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
list(string) null no
label_value_case Controls the letter case of ID elements (labels) as included in id,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the tags input.
Possible values: lower, title, upper and none (no transformation).
Set this to title and set delimiter to "" to yield Pascal Case IDs.
Default value: lower.
string null no
labels_as_tags Set of labels (ID elements) to include as tags in the tags output.
Default is to include all labels.
Tags with empty values will not be included in the tags output.
Set to [] to suppress all generated tags.
Notes:
The value of the name tag, if included, will be the id, not the name.
Unlike other null-label inputs, the initial setting of labels_as_tags cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
set(string)
[
"default"
]
no
lambda_function_association A config block that triggers a lambda@edge function with specific actions
list(object({
event_type = string
include_body = bool
lambda_arn = string
}))
[] no
log_expiration_days Number of days after object creation to expire Cloudfront Access Log objects.
Only effective if cloudfront_access_log_create_bucket is true.
number 90 no
log_glacier_transition_days Number of days after object creation to move Cloudfront Access Log objects to the glacier tier.
Only effective if cloudfront_access_log_create_bucket is true.
number 60 no
log_include_cookies DEPRECATED. Use cloudfront_access_log_include_cookies instead. bool null no
log_prefix DEPRECATED. Use cloudfront_access_log_prefix instead. string null no
log_standard_transition_days Number of days after object creation to move Cloudfront Access Log objects to the infrequent access tier.
Only effective if cloudfront_access_log_create_bucket is true.
number 30 no
log_versioning_enabled Set true to enable object versioning in the created Cloudfront Access Log S3 Bucket.
Only effective if cloudfront_access_log_create_bucket is true.
bool false no
logging_enabled DEPRECATED. Use cloudfront_access_logging_enabled instead. bool null no
max_ttl Maximum amount of time (in seconds) that an object is in a CloudFront cache number 31536000 no
min_ttl Minimum amount of time that you want objects to stay in CloudFront caches number 0 no
minimum_protocol_version Cloudfront TLS minimum protocol version.
If var.acm_certificate_arn is unset, only "TLSv1" can be specified. See: AWS Cloudfront create-distribution documentation
and Supported protocols and ciphers between viewers and CloudFront for more information.
Defaults to "TLSv1.2_2019" unless var.acm_certificate_arn is unset, in which case it defaults to TLSv1
string "" no
name ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a tag.
The "name" tag is set to the full id string. There is no tag with the value of the name input.
string null no
namespace ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique string null no
ordered_cache An ordered list of cache behaviors resource for this distribution.
List in order of precedence (first match wins). This is in addition to the default cache policy.
Set target_origin_id to "" to specify the S3 bucket origin created by this module.
list(object({
target_origin_id = string
path_pattern = string

allowed_methods = list(string)
cached_methods = list(string)
compress = bool
trusted_signers = list(string)
trusted_key_groups = list(string)

cache_policy_id = string
origin_request_policy_id = string

viewer_protocol_policy = string
min_ttl = number
default_ttl = number
max_ttl = number
response_headers_policy_id = string

forward_query_string = bool
forward_header_values = list(string)
forward_cookies = string
forward_cookies_whitelisted_names = list(string)

lambda_function_association = list(object({
event_type = string
include_body = bool
lambda_arn = string
}))

function_association = list(object({
event_type = string
function_arn = string
}))
}))
[] no
origin_bucket Name of an existing S3 bucket to use as the origin. If this is not provided, it will create a new s3 bucket using var.name and other context related inputs string null no
origin_force_destroy Delete all objects from the bucket so that the bucket can be destroyed without error (e.g. true or false) bool false no
origin_groups List of Origin Groups to create in the distribution.
The values of primary_origin_id and failover_origin_id must correspond to origin IDs existing in var.s3_origins or var.custom_origins.

If primary_origin_id is set to null or "", then the origin id of the origin created by this module will be used in its place.
This is to allow for the use case of making the origin created by this module the primary origin in an origin group.
list(object({
primary_origin_id = string
failover_origin_id = string
failover_criteria = list(string)
}))
[] no
origin_path An optional element that causes CloudFront to request your content from a directory in your Amazon S3 bucket or your custom origin. It must begin with a /. Do not add a / at the end of the path. string "" no
origin_request_policy_id The unique identifier of the origin request policy that is attached to the behavior.
Should be used in conjunction with cache_policy_id.
string null no
origin_shield_enabled If enabled, origin shield will be enabled for the default origin bool false no
origin_ssl_protocols The SSL/TLS protocols that you want CloudFront to use when communicating with your origin over HTTPS. list(string)
[
"TLSv1",
"TLSv1.1",
"TLSv1.2"
]
no
override_origin_bucket_policy When using an existing origin bucket (through var.origin_bucket), setting this to 'false' will prevent the existing bucket policy from being overridden bool true no
parent_zone_id ID of the hosted zone to contain this record (or specify parent_zone_name). Requires dns_alias_enabled set to true string "" no
parent_zone_name Name of the hosted zone to contain this record (or specify parent_zone_id). Requires dns_alias_enabled set to true string "" no
price_class Price class for this distribution: PriceClass_All, PriceClass_200, PriceClass_100 string "PriceClass_100" no
query_string_cache_keys When forward_query_string is enabled, only the query string keys listed in this argument are cached (incompatible with cache_policy_id) list(string) [] no
realtime_log_config_arn The ARN of the real-time log configuration that is attached to this cache behavior string null no
redirect_all_requests_to A hostname to redirect all website requests for this distribution to. If this is set, it overrides other website settings string "" no
regex_replace_chars Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, "/[^a-zA-Z0-9-]/" is used to remove all characters other than hyphens, letters and digits.
string null no
response_headers_policy_id The identifier for a response headers policy string "" no
routing_rules A json array containing routing rules describing redirect behavior and when redirects are applied string "" no
s3_access_log_bucket_name Name of the existing S3 bucket where S3 Access Logs will be delivered. Default is not to enable S3 Access Logging. string "" no
s3_access_log_prefix Prefix to use for S3 Access Log object keys. Defaults to logs/${module.this.id} string "" no
s3_access_logging_enabled Set true to deliver S3 Access Logs to the s3_access_log_bucket_name bucket.
Defaults to false if s3_access_log_bucket_name is empty (the default), true otherwise.
Must be set explicitly if the access log bucket is being created at the same time as this module is being invoked.
bool null no
s3_object_ownership Specifies the S3 object ownership control on the origin bucket. Valid values are ObjectWriter, BucketOwnerPreferred, and BucketOwnerEnforced. string "ObjectWriter" no
s3_origins A list of S3 origins (in addition to the one created by this module) for this distribution.
S3 buckets configured as websites are custom_origins, not s3_origins.
Specifying s3_origin_config.origin_access_identity as null or "" will have it translated to the origin_access_identity used by the origin created by the module.
list(object({
domain_name = string
origin_id = string
origin_path = string
s3_origin_config = object({
origin_access_identity = string
})
}))
[] no
s3_website_password_enabled If set to true, and website_enabled is also true, a password will be required in the Referrer field of the
HTTP request in order to access the website, and Cloudfront will be configured to pass this password in its requests.
This will make it much harder for people to bypass Cloudfront and access the S3 website directly via its website endpoint.
bool false no
stage ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release' string null no
tags Additional tags (e.g. {'BusinessUnit': 'XYZ'}).
Neither the tag keys nor the tag values will be modified by this module.
map(string) {} no
tenant ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for string null no
trusted_key_groups A list of key group IDs that CloudFront can use to validate signed URLs or signed cookies. list(string) [] no
trusted_signers The AWS accounts, if any, that you want to allow to create signed URLs for private content. 'self' is acceptable. list(string) [] no
versioning_enabled When set to 'true' the s3 origin bucket will have versioning enabled bool true no
viewer_protocol_policy Limit the protocol users can use to access content. One of allow-all, https-only, or redirect-to-https string "redirect-to-https" no
wait_for_deployment When set to 'true' the resource will wait for the distribution status to change from InProgress to Deployed bool true no
web_acl_id ID of the AWS WAF web ACL that is associated with the distribution string "" no
website_enabled Set to true to enable the created S3 bucket to serve as a website independently of Cloudfront,
and to use that website as the origin. See the README for details and caveats. See also s3_website_password_enabled.
bool false no
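
As a concrete illustration of the ordered_cache input described above, the sketch below adds one extra cache behavior. The values are illustrative; the object type declares every attribute, so unused ones are set to null or empty here, and target_origin_id = "" selects the S3 origin created by this module, as noted in the input description.

module "cdn" {
  source = "cloudposse/cloudfront-s3-cdn/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  namespace = "eg"
  stage     = "prod"
  name      = "app"

  ordered_cache = [
    {
      target_origin_id = "" # the origin created by this module
      path_pattern     = "/static/*"

      allowed_methods    = ["GET", "HEAD"]
      cached_methods     = ["GET", "HEAD"]
      compress           = true
      trusted_signers    = []
      trusted_key_groups = []

      cache_policy_id          = null
      origin_request_policy_id = null

      viewer_protocol_policy     = "redirect-to-https"
      min_ttl                    = 0
      default_ttl                = 60
      max_ttl                    = 3600
      response_headers_policy_id = null

      forward_query_string              = false
      forward_header_values             = []
      forward_cookies                   = "none"
      forward_cookies_whitelisted_names = []

      lambda_function_association = []
      function_association        = []
    }
  ]
}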

Outputs

Name Description
aliases Aliases of the CloudFront distribution.
cf_arn ARN of AWS CloudFront distribution
cf_domain_name Domain name corresponding to the distribution
cf_etag Current version of the distribution's information
cf_hosted_zone_id CloudFront Route 53 zone ID
cf_id ID of AWS CloudFront distribution
cf_identity_iam_arn CloudFront Origin Access Identity IAM ARN
cf_origin_groups List of Origin Groups in the CloudFront distribution.
cf_origin_ids List of Origin IDs in the CloudFront distribution.
cf_primary_origin_id The ID of the origin created by this module.
cf_s3_canonical_user_id Canonical user ID for CloudFront Origin Access Identity
cf_status Current status of the distribution
logs Log bucket resource
s3_bucket Name of origin S3 bucket
s3_bucket_arn ARN of origin S3 bucket
s3_bucket_domain_name Domain of origin S3 bucket
s3_bucket_policy Final computed S3 bucket policy
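
If you need to surface any of these outputs from your own configuration, it might look like the following sketch (it assumes a module "cdn" block as in the Usage examples; the output names cdn_domain_name and cdn_origin_bucket are arbitrary):

output "cdn_domain_name" {
  description = "CloudFront domain name for the distribution"
  value       = module.cdn.cf_domain_name
}

output "cdn_origin_bucket" {
  description = "Name of the origin S3 bucket"
  value       = module.cdn.s3_bucket
}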

Share the Love

Like this project? Please give it a ★ on our GitHub! (it helps us a lot)

Are you using this project or any of our other projects? Consider leaving a testimonial. =)

Related Projects

Check out these related projects.

Help

Got a question? We got answers.

File a GitHub issue, send us an email or join our Slack Community.


DevOps Accelerator for Startups

We are a DevOps Accelerator. We'll help you build your cloud infrastructure from the ground up so you can own it. Then we'll show you how to operate it and stick around for as long as you need us.

Learn More

Work directly with our team of DevOps experts via email, slack, and video conferencing.

We deliver 10x the value for a fraction of the cost of a full-time engineer. Our track record is not even funny. If you want things done right and you need it done FAST, then we're your best bet.

  • Reference Architecture. You'll get everything you need from the ground up built using 100% infrastructure as code.
  • Release Engineering. You'll have end-to-end CI/CD with unlimited staging environments.
  • Site Reliability Engineering. You'll have total visibility into your apps and microservices.
  • Security Baseline. You'll have built-in governance with accountability and audit logs for all changes.
  • GitOps. You'll be able to operate your infrastructure via Pull Requests.
  • Training. You'll receive hands-on training so your team can operate what we build.
  • Questions. You'll have a direct line of communication between our teams via a Shared Slack channel.
  • Troubleshooting. You'll get help to triage when things aren't working.
  • Code Reviews. You'll receive constructive feedback on Pull Requests.
  • Bug Fixes. We'll rapidly work with you to fix any bugs in our projects.

Slack Community

Join our Open Source Community on Slack. It's FREE for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to rollout and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally sweet infrastructure.

Discourse Forums

Participate in our Discourse Forums. Here you'll find answers to commonly asked questions. Most questions will be related to the enormous number of projects we support on our GitHub. Come here to collaborate on answers, find solutions, and get ideas about the products and services we value. It only takes a minute to get started! Just sign in with SSO using your GitHub account.

Newsletter

Sign up for our newsletter that covers everything on our technology radar. Receive updates on what we're up to on GitHub as well as awesome new projects we discover.

Office Hours

Join us every Wednesday via Zoom for our weekly "Lunch & Learn" sessions. It's FREE for everyone!


Contributing

Bug Reports & Feature Requests

Please use the issue tracker to report any bugs or file feature requests.

Developing

If you are interested in being a contributor and want to get involved in developing this project or help out with our other projects, we would love to hear from you! Shoot us an email.

In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.

  1. Fork the repo on GitHub
  2. Clone the project to your own machine
  3. Commit changes to your own branch
  4. Push your work back up to your fork
  5. Submit a Pull Request so that we can review your changes

NOTE: Be sure to merge the latest changes from "upstream" before making a pull request!

Copyright

Copyright © 2017-2023 Cloud Posse, LLC

License


See LICENSE for full details.

Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.

Trademarks

All other trademarks referenced herein are the property of their respective owners.

About

This project is maintained and funded by Cloud Posse, LLC. Like it? Please let us know by leaving a testimonial!

Cloud Posse

We're a DevOps Professional Services company based in Los Angeles, CA. We ❤️ Open Source Software.

We offer paid support on all of our projects.

Check out our other projects, follow us on twitter, apply for a job, or hire us to help with your cloud strategy and implementation.

Contributors

  • Erik Osterman
  • Andriy Knysh
  • Jamie Nelson
  • Clive Zagno
  • David Mattia
  • RB
  • John McGehee
  • Yonatan Koren
  • Lucas Caparelli


More Repositories

1

geodesic

🚀 Geodesic is a DevOps Linux Toolbox in Docker
Shell
915
star
2

bastion

🔒 Secure Bastion implemented as Docker Container running Alpine Linux with Google Authenticator & DUO MFA support
Shell
623
star
3

terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes])
HCL
516
star
4

atmos

👽 Workflow automation tool for DevOps. Keep configuration DRY with hierarchical imports of configurations, inheritance, and WAY more. Native support for Terraform and Helmfile.
Go
490
star
5

terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster
HCL
453
star
6

terraform-aws-components

Opinionated, self-contained Terraform root modules that each solve one, specific problem
HCL
403
star
7

build-harness

Collection of Makefiles to facilitate building Golang projects, Dockerfiles, Helm charts, and more
Makefile
348
star
8

terraform-aws-tfstate-backend

Terraform module that provisions an S3 bucket to store the `terraform.tfstate` file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption.
HCL
344
star
9

terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource
HCL
316
star
10

terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment
HCL
292
star
11

helmfiles

Comprehensive Distribution of Helmfiles for Kubernetes
Makefile
250
star
12

terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack
HCL
250
star
13

terraform-aws-vpc

Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways
HCL
212
star
14

terraform-aws-elasticsearch

Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash.
HCL
211
star
15

terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more.
HCL
206
star
16

terraform-aws-cloudtrail-cloudwatch-alarms

Terraform module for creating alarms for tracking important changes and occurrences from cloudtrail.
HCL
193
star
17

tfmask

Terraform utility to mask select output from `terraform plan` and `terraform apply`
Go
191
star
18

terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build
HCL
185
star
19

copyright-header

© Copyright Header is a utility to manipulate software licenses on source code.
Ruby
177
star
20

terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR
HCL
170
star
21

terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC
HCL
165
star
22

prometheus-to-cloudwatch

Utility for scraping Prometheus metrics from a Prometheus client endpoint and publishing them to CloudWatch
Go
159
star
23

reference-architectures

[WIP] Get up and running quickly with one of our reference architectures using our fully automated cold-start process.
HCL
154
star
24

charts

The "Cloud Posse" Distribution of Kubernetes Applications
Mustache
149
star
25

terraform-aws-s3-bucket

Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems
HCL
147
star
26

terraform-null-ansible

Terraform Module to run ansible playbooks
HCL
146
star
27

terraform-aws-ec2-instance

Terraform module for provisioning a general purpose EC2 host
HCL
143
star
28

terraform-aws-key-pair

Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys)
HCL
141
star
29

terraform-aws-ecs-codepipeline

Terraform Module for CI/CD with AWS Code Pipeline and Code Build for ECS https://cloudposse.com/
HCL
139
star
30

terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres
HCL
135
star
31

terraform-aws-rds

Terraform module to provision AWS RDS instances
HCL
134
star
32

github-authorized-keys

Use GitHub teams to manage system user accounts and authorized_keys
Go
131
star
33

terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB.
HCL
129
star
34

terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster
HCL
129
star
35

packages

Cloud Posse DevOps distribution of linux packages for native apps, binaries, alpine packages, debian packages, and redhat packages.
Shell
125
star
36

terraform-example-module

Example Terraform Module Scaffolding
HCL
125
star
37

terraform-aws-ec2-bastion-server

Terraform module to define a generic Bastion host with parameterized user_data and support for AWS SSM Session Manager for remote access with IAM authentication.
HCL
124
star
38

tfenv

Transform environment variables for use with Terraform (e.g. `HOSTNAME` ⇨ `TF_VAR_hostname`)
Go
123
star
39

terraform-terraform-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes])
HCL
116
star
40

terraform-aws-s3-website

Terraform Module for Creating S3 backed Websites and Route53 DNS
HCL
114
star
41

terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS
HCL
113
star
42

terraform-aws-vpc-peering-multi-account

Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers
HCL
108
star
43

terraform-aws-vpc-peering

Terraform module to create a peering connection between two VPCs in the same AWS account.
HCL
105
star
44

github-commenter

Command line utility for creating GitHub comments on Commits, Pull Request Reviews or Issues
Go
104
star
45

terraform-aws-rds-cloudwatch-sns-alarms

Terraform module that configures important RDS alerts using CloudWatch and sends them to an SNS topic
HCL
103
star
46

terraform-aws-s3-log-storage

This module creates an S3 bucket suitable for receiving logs from other AWS services such as S3, CloudFront, and CloudTrail
HCL
103
star
47

terraform-aws-iam-role

A Terraform module that creates IAM role with provided JSON IAM polices documents.
HCL
101
star
48

github-status-updater

Command line utility for updating GitHub commit statuses and enabling required status checks for pull requests
Go
100
star
49

terraform-aws-codebuild

Terraform Module to easily leverage AWS CodeBuild for Continuous Integration
HCL
96
star
50

terraform-aws-alb

Terraform module to provision a standard ALB for HTTP/HTTP traffic
HCL
94
star
51

terraform-aws-cloudfront-cdn

Terraform Module that implements a CloudFront Distribution (CDN) for a custom origin.
HCL
93
star
52

terraform-aws-ssm-parameter-store

Terraform module to populate AWS Systems Manager (SSM) Parameter Store with values from Terraform. Works great with Chamber.
HCL
93
star
53

terraform-aws-acm-request-certificate

Terraform module to request an ACM certificate for a domain name and create a CNAME record in the DNS zone to complete certificate validation
HCL
93
star
54

terraform-provider-utils

The Cloud Posse Terraform Provider for various utilities (e.g. deep merging, stack configuration management)
Go
93
star
55

terraform-aws-multi-az-subnets

Terraform module for multi-AZ public and private subnets provisioning
HCL
90
star
56

terraform-aws-cloudtrail

Terraform module to provision an AWS CloudTrail and an encrypted S3 bucket with versioning to store CloudTrail logs
HCL
90
star
57

sudosh

Shell wrapper to run a login shell with `sudo` as the current user for the purpose of audit logging
Go
88
star
58

terraform-aws-backup

Terraform module to provision AWS Backup, a fully managed backup service that makes it easy to centralize and automate the back up of data across AWS services such as EBS volumes, RDS databases, DynamoDB tables, EFS file systems, and AWS Storage Gateway volumes.
HCL
87
star
59

terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers
HCL
84
star
60

terraform-aws-eks-node-group

Terraform module to provision a fully managed AWS EKS Node Group
HCL
82
star
61

terraform-aws-efs

Terraform Module to define an EFS Filesystem (aka NFS)
HCL
79
star
62

terraform-datadog-platform

Terraform module to configure and provision Datadog monitors, custom RBAC roles with permissions, Datadog synthetic tests, Datadog child organizations, and other Datadog resources from a YAML configuration, complete with automated tests.
HCL
79
star
63

terraform-aws-iam-system-user

Terraform Module to Provision a Basic IAM System User Suitable for CI/CD Systems (E.g. TravisCI, CircleCI)
HCL
76
star
64

terraform-aws-sso

Terraform module to configure AWS Single Sign-On (SSO)
HCL
76
star
65

terraform-aws-dynamodb

Terraform module that implements AWS DynamoDB with support for AutoScaling
HCL
72
star
66

terraform-aws-emr-cluster

Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS
HCL
70
star
67

terraform-aws-msk-apache-kafka-cluster

Terraform module to provision AWS MSK
HCL
68
star
68

terraform-yaml-config

Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps
HCL
66
star
69

terraform-aws-iam-user

Terraform Module to provision a basic IAM user suitable for humans.
HCL
66
star
70

slack-notifier

Command line utility to send messages with attachments to Slack channels via Incoming Webhooks
Go
65
star
71

terraform-aws-cloudwatch-logs

Terraform Module to Provide a CloudWatch Logs Endpoint
HCL
61
star
72

terraform-aws-kms-key

Terraform module to provision a KMS key with alias
HCL
61
star
73

actions

Our Library of GitHub Actions
TypeScript
58
star
74

terraform-aws-iam-s3-user

Terraform module to provision a basic IAM user with permissions to access S3 resources, e.g. to give the user read/write/delete access to the objects in an S3 bucket
HCL
53
star
75

load-testing

A collection of best practices, workflows, scripts and scenarios that Cloud Posse uses for load and performance testing of websites and applications (in particular those deployed on Kubernetes clusters)
JavaScript
52
star
76

docs

📘 SweetOps documentation for the Cloud Posse way of doing Infrastructure as Code. https://docs.cloudposse.com
Python
51
star
77

terraform-aws-documentdb-cluster

Terraform module to provision a DocumentDB cluster on AWS
HCL
51
star
78

terraform-aws-iam-policy-document-aggregator

Terraform module to aggregate multiple IAM policy documents into single policy document.
HCL
50
star
79

terraform-aws-vpn-connection

Terraform module to provision a site-to-site VPN connection between a VPC and an on-premises network
HCL
49
star
80

terraform-aws-route53-alias

Terraform Module to Define Vanity Host/Domain (e.g. `brand.com`) as an ALIAS record
HCL
48
star
81

terraform-aws-ecs-atlantis

Terraform module for deploying Atlantis as an ECS Task
HCL
47
star
82

terraform-aws-cloudtrail-s3-bucket

S3 bucket with built in IAM policy to allow CloudTrail logs
HCL
47
star
83

terraform-yaml-stack-config

Terraform module that loads an opinionated "stack" configuration from local or remote YAML sources. It supports deep-merged variables, settings, ENV variables, backend config, and remote state outputs for Terraform and helmfile components.
HCL
47
star
84

terraform-aws-transit-gateway

Terraform module to provision AWS Transit Gateway, AWS Resource Access Manager (AWS RAM) Resource, and share the Transit Gateway with the Organization or another AWS Account.
HCL
46
star
85

terraform-aws-route53-cluster-zone

Terraform module to easily define consistent cluster domains on Route53 (e.g. `prod.ourcompany.com`)
HCL
46
star
86

terraform-aws-named-subnets

Terraform module for named subnets provisioning.
HCL
45
star
87

terraform-aws-route53-cluster-hostname

Terraform module to define a consistent AWS Route53 hostname
HCL
45
star
88

terraform-aws-elastic-beanstalk-application

Terraform Module to define an ElasticBeanstalk Application
HCL
44
star
89

terraform-aws-config

This module configures AWS Config, a service that enables you to assess, audit, and evaluate the configurations of your AWS resources.
HCL
43
star
90

terraform-aws-eks-fargate-profile

Terraform module to provision an EKS Fargate Profile
HCL
42
star
91

terraform-aws-efs-backup

Terraform module designed to easily backup EFS filesystems to S3 using DataPipeline
HCL
41
star
92

terraform-aws-sns-topic

Terraform Module to Provide an Amazon Simple Notification Service (SNS)
HCL
40
star
93

terraform-aws-service-control-policies

Terraform module to provision Service Control Policies (SCP) for AWS Organizations, Organizational Units, and AWS accounts
HCL
38
star
94

terraform-aws-cloudformation-stack

Terraform module to provision CloudFormation Stack
HCL
38
star
95

terraform-provider-awsutils

Terraform provider to help with various AWS automation tasks (mostly all that stuff we cannot accomplish with the official AWS terraform provider)
Go
37
star
96

terraform-aws-ec2-client-vpn

HCL
37
star
97

terraform-aws-utils

Utility functions for use with Terraform in the AWS environment
HCL
36
star
98

terraform-aws-ecs-cloudwatch-sns-alarms

Terraform module to create CloudWatch Alarms on ECS Service level metrics.
HCL
36
star
99

terraform-aws-iam-assumed-roles

Terraform Module for Assumed Roles on AWS with IAM Groups Requiring MFA
HCL
33
star
100

terraform-aws-mq-broker

Terraform module for provisioning an AmazonMQ broker
HCL
33
star