  • Stars: 147
  • Rank: 242,494 (Top 5%)
  • Language: HCL
  • License: Apache License 2.0
  • Created over 5 years ago
  • Updated 11 months ago


Repository Details

Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems

terraform-aws-s3-bucket

This module creates an S3 bucket with support for versioning, lifecycles, object locks, replication, encryption, ACL, bucket object policies, and static website hosting.

For backward compatibility, it sets the S3 bucket ACL to private and the s3_object_ownership to ObjectWriter. Moving forward, setting s3_object_ownership to BucketOwnerEnforced is recommended, and doing so automatically disables the ACL.

This module blocks public access to the bucket by default. See block_public_acls, block_public_policy, ignore_public_acls, and restrict_public_buckets to change the settings. See AWS documentation for more details.
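
If the bucket genuinely needs to be public, the four settings can be relaxed explicitly. A minimal sketch (leaving them at their defaults of true is recommended for most buckets):

module "s3_bucket" {
  source = "cloudposse/s3-bucket/aws"
  # version = "x.x.x"
  name      = "app"
  stage     = "test"
  namespace = "eg"

  # All four settings default to true (public access blocked).
  # Override them only when public access is intentional.
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}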

This module can optionally create an IAM User with access to the S3 bucket. This is inherently insecure: for anyone to act as that User, access keys must be generated, and anything generated by Terraform is stored unencrypted in the Terraform state. See the Terraform documentation for more details.

The best way to grant access to the bucket is to grant one or more IAM Roles access to the bucket via privileged_principal_arns. This IAM Role can be assumed by EC2 instances via their Instance Profile, or by Kubernetes (EKS) services using IRSA. Entities outside of AWS can assume the Role via OIDC. (See this example of connecting GitHub to enable GitHub Actions to assume AWS IAM roles, or use this Cloud Posse component if you are already using the Cloud Posse reference architecture.)

If neither of those approaches works, then as a last resort you can set user_enabled = true and this module will provision a basic IAM user with permissions to access the bucket. We do not recommend creating IAM users this way for any other purpose.

If an IAM user is created, the IAM user name is constructed using terraform-null-label and some input is required. The simplest input is name. By default the name will be converted to lower case and all non-alphanumeric characters except for hyphen will be removed. See the documentation for terraform-null-label to learn how to override these defaults if desired.
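
For example, with namespace = "eg", stage = "test", and name = "app" (as in the usage examples below), the generated ID, and therefore the IAM user name, would normally be eg-test-app under the default label order.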

If an AWS Access Key is created, it is stored either in SSM Parameter Store or is provided as a module output, but not both. Using SSM Parameter Store is recommended because that will keep the secret from being easily accessible via Terraform remote state lookup, but the key will still be stored unencrypted in the Terraform state in any case.
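
A minimal sketch of creating the user with its access key stored in SSM Parameter Store and then reading the secret back where it is needed (the parameter path comes from the module output, so no path needs to be hard-coded):

module "s3_bucket" {
  source = "cloudposse/s3-bucket/aws"
  # version = "x.x.x"
  name      = "app"
  stage     = "test"
  namespace = "eg"

  user_enabled            = true
  access_key_enabled      = true
  store_access_key_in_ssm = true
}

# The secret access key is not exposed as a module output when stored in SSM;
# read it from Parameter Store wherever it is needed.
data "aws_ssm_parameter" "s3_user_secret_key" {
  name            = module.s3_bucket.secret_access_key_ssm_path
  with_decryption = true
}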


This project is part of our comprehensive "SweetOps" approach towards DevOps.

Terraform Open Source Modules

It's 100% Open Source and licensed under the APACHE2.

We literally have hundreds of terraform modules that are Open Source and well-maintained. Check them out!

Security & Compliance

Security scanning is graciously provided by Bridgecrew. Bridgecrew is the leading fully hosted, cloud-native solution providing continuous Terraform security and compliance.

Benchmark | Description
Infrastructure Security | Infrastructure Security Compliance
CIS KUBERNETES | Center for Internet Security, KUBERNETES Compliance
CIS AWS | Center for Internet Security, AWS Compliance
CIS AZURE | Center for Internet Security, AZURE Compliance
PCI-DSS | Payment Card Industry Data Security Standards Compliance
NIST-800-53 | National Institute of Standards and Technology Compliance
ISO27001 | Information Security Management System, ISO/IEC 27001 Compliance
SOC2 | Service Organization Control 2 Compliance
CIS GCP | Center for Internet Security, GCP Compliance
HIPAA | Health Insurance Portability and Accountability Compliance

Usage

IMPORTANT: We do not pin modules to versions in our examples because of the difficulty of keeping the versions in the documentation in sync with the latest released versions. We highly recommend that in your code you pin the version to the exact version you are using so that your infrastructure remains stable, and update versions in a systematic way so that they do not catch you by surprise.

Using BucketOwnerEnforced

module "s3_bucket" {
  source = "cloudposse/s3-bucket/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"
  name                     = "app"
  stage                    = "test"
  namespace                = "eg"

  s3_object_ownership      = "BucketOwnerEnforced"
  enabled                  = true
  user_enabled             = false
  versioning_enabled       = false

  privileged_principal_actions   = ["s3:GetObject", "s3:ListBucket", "s3:GetBucketLocation"]
  privileged_principal_arns      = [
    {
      (local.deployment_iam_role_arn) = [""]
    },
    {
      (local.additional_deployment_iam_role_arn) = ["prefix1/", "prefix2/"]
    }
  ]
}
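
The example above assumes local.deployment_iam_role_arn and local.additional_deployment_iam_role_arn are defined elsewhere in your configuration; a minimal sketch of such locals (the role ARNs are hypothetical placeholders):

locals {
  # Hypothetical ARNs of the IAM roles used by your CI/CD systems.
  deployment_iam_role_arn            = "arn:aws:iam::123456789012:role/deployment"
  additional_deployment_iam_role_arn = "arn:aws:iam::123456789012:role/additional-deployment"
}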

Configuring S3 storage lifecycle:

locals {
  lifecycle_configuration_rules = [{
    enabled = true # bool
    id      = "v2rule"

    abort_incomplete_multipart_upload_days = 1 # number

    filter_and = null
    expiration = {
      days = 120 # integer > 0
    }
    noncurrent_version_expiration = {
      newer_noncurrent_versions = 3  # integer > 0
      noncurrent_days           = 60 # integer >= 0
    }
    transition = [{
      days          = 30            # integer >= 0
      storage_class = "STANDARD_IA" # string/enum, one of GLACIER, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, DEEP_ARCHIVE, GLACIER_IR.
      },
      {
        days          = 60           # integer >= 0
        storage_class = "ONEZONE_IA" # string/enum, one of GLACIER, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, DEEP_ARCHIVE, GLACIER_IR.
    }]
    noncurrent_version_transition = [{
      newer_noncurrent_versions = 3            # integer >= 0
      noncurrent_days           = 30           # integer >= 0
      storage_class             = "ONEZONE_IA" # string/enum, one of GLACIER, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, DEEP_ARCHIVE, GLACIER_IR.
    }]
  }]
}
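
The rules above are just a local value; to apply them, pass them to the module's lifecycle_configuration_rules input, for example:

module "s3_bucket" {
  source = "cloudposse/s3-bucket/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"
  name      = "app"
  stage     = "test"
  namespace = "eg"

  # Noncurrent-version rules only have an effect when versioning is enabled.
  versioning_enabled            = true
  lifecycle_configuration_rules = local.lifecycle_configuration_rules
}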

Allowing specific principal ARNs to perform actions on the bucket:

module "s3_bucket" {
  source = "cloudposse/s3-bucket/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"
  s3_object_ownership      = "BucketOwnerEnforced"
  enabled                  = true
  user_enabled             = true
  versioning_enabled       = false
  allowed_bucket_actions   = ["s3:GetObject", "s3:ListBucket", "s3:GetBucketLocation"]
  name                     = "app"
  stage                    = "test"
  namespace                = "eg"

  privileged_principal_arns = [
  {
    "arn:aws:iam::123456789012:role/principal1" = ["prefix1/", "prefix2/"]
  }, {
    "arn:aws:iam::123456789012:role/principal2" = [""]
  }]
  privileged_principal_actions = [
    "s3:PutObject", 
    "s3:PutObjectAcl", 
    "s3:GetObject", 
    "s3:DeleteObject", 
    "s3:ListBucket", 
    "s3:ListBucketMultipartUploads", 
    "s3:GetBucketLocation", 
    "s3:AbortMultipartUpload"
  ]
}
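
Enabling replication to another bucket (a hedged sketch: the replica bucket ARN is a hypothetical placeholder, and the replica bucket must already exist with versioning enabled):

module "s3_bucket" {
  source = "cloudposse/s3-bucket/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"
  name      = "app"
  stage     = "test"
  namespace = "eg"

  # Replication requires versioning on both the source and the replica bucket.
  versioning_enabled     = true
  s3_replication_enabled = true
  s3_replica_bucket_arn  = "arn:aws:s3:::eg-test-app-replica" # hypothetical replica bucket

  s3_replication_rules = [
    {
      id     = "replicate-everything"
      status = "Enabled"
      destination = {
        storage_class = "STANDARD"
      }
    }
  ]
}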

Makefile Targets

Available targets:

  help                                Help screen
  help/all                            Display help for all targets
  help/short                          This help short screen
  lint                                Lint terraform code
  test/%                              Run Terraform commands in the examples/complete folder; e.g. make test/plan

Requirements

Name Version
terraform >= 1.3.0
aws >= 4.9.0
time >= 0.7

Providers

Name Version
aws >= 4.9.0
time >= 0.7

Modules

Name Source Version
s3_user cloudposse/iam-s3-user/aws 1.2.0
this cloudposse/label/null 0.25.0

Resources

Name Type
aws_iam_policy.replication resource
aws_iam_role.replication resource
aws_iam_role_policy_attachment.replication resource
aws_s3_bucket.default resource
aws_s3_bucket_accelerate_configuration.default resource
aws_s3_bucket_acl.default resource
aws_s3_bucket_cors_configuration.default resource
aws_s3_bucket_lifecycle_configuration.default resource
aws_s3_bucket_logging.default resource
aws_s3_bucket_object_lock_configuration.default resource
aws_s3_bucket_ownership_controls.default resource
aws_s3_bucket_policy.default resource
aws_s3_bucket_public_access_block.default resource
aws_s3_bucket_replication_configuration.default resource
aws_s3_bucket_server_side_encryption_configuration.default resource
aws_s3_bucket_versioning.default resource
aws_s3_bucket_website_configuration.default resource
aws_s3_bucket_website_configuration.redirect resource
time_sleep.wait_for_aws_s3_bucket_settings resource
aws_canonical_user_id.default data source
aws_iam_policy_document.aggregated_policy data source
aws_iam_policy_document.bucket_policy data source
aws_iam_policy_document.replication data source
aws_iam_policy_document.replication_sts data source
aws_partition.current data source

Inputs

Name Description Type Default Required
access_key_enabled Set to true to create an IAM Access Key for the created IAM user bool true no
acl The canned ACL to apply.
Deprecated by AWS in favor of bucket policies.
Automatically disabled if s3_object_ownership is set to "BucketOwnerEnforced".
Defaults to "private" for backwards compatibility, but we recommend setting s3_object_ownership to "BucketOwnerEnforced" instead.
string "private" no
additional_tag_map Additional key-value pairs to add to each map in tags_as_list_of_maps. Not added to tags or id.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
map(string) {} no
allow_encrypted_uploads_only Set to true to prevent uploads of unencrypted objects to S3 bucket bool false no
allow_ssl_requests_only Set to true to require requests to use Secure Sockets Layer (SSL/HTTPS). This will explicitly deny access to HTTP requests bool false no
allowed_bucket_actions List of actions the user is permitted to perform on the S3 bucket list(string)
[
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:DeleteObject",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:GetBucketLocation",
"s3:AbortMultipartUpload"
]
no
attributes ID element. Additional attributes (e.g. workers or cluster) to add to id,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the delimiter
and treated as a single ID element.
list(string) [] no
block_public_acls Set to false to disable blocking new public ACLs on the bucket bool true no
block_public_policy Set to false to disable blocking new public bucket policies on the bucket bool true no
bucket_key_enabled Set this to true to use Amazon S3 Bucket Keys for SSE-KMS, which may or may not reduce the number of AWS KMS requests.
For more information, see: https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-key.html
bool false no
bucket_name Bucket name. If provided, the bucket will be created with this name instead of generating the name from the context string null no
context Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as null to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
any
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
no
cors_configuration Specifies the allowed headers, methods, origins and exposed headers when using CORS on this bucket
list(object({
id = optional(string)
allowed_headers = optional(list(string))
allowed_methods = optional(list(string))
allowed_origins = optional(list(string))
expose_headers = optional(list(string))
max_age_seconds = optional(number)
}))
[] no
delimiter Delimiter to be used between ID elements.
Defaults to - (hyphen). Set to "" to use no delimiter at all.
string null no
descriptor_formats Describe additional descriptors to be output in the descriptors output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
{ format = string, labels = list(string) }
(Type is any so the map values can later be enhanced to provide additional options.)
format is a Terraform format string to be passed to the format() function.
labels is a list of labels, in order, to pass to format() function.
Label values will be normalized before being passed to format() so they will be
identical to how they appear in id.
Default is {} (descriptors output will be empty).
any {} no
enabled Set to false to prevent the module from creating any resources bool null no
environment ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT' string null no
force_destroy When true, permits a non-empty S3 bucket to be deleted by first deleting all objects in the bucket.
THESE OBJECTS ARE NOT RECOVERABLE even if they were versioned and stored in Glacier.
bool false no
grants A list of policy grants for the bucket, taking a list of permissions.
Conflicts with acl. Set acl to null to use this.
Deprecated by AWS in favor of bucket policies.
Automatically disabled if s3_object_ownership is set to "BucketOwnerEnforced".
list(object({
id = string
type = string
permissions = list(string)
uri = string
}))
[] no
id_length_limit Limit id to this many characters (minimum 6).
Set to 0 for unlimited length.
Set to null to keep the existing setting, which defaults to 0.
Does not affect id_full.
number null no
ignore_public_acls Set to false to disable ignoring public ACLs on the bucket bool true no
kms_master_key_arn The AWS KMS master key ARN used for the SSE-KMS encryption. This can only be used when you set the value of sse_algorithm as aws:kms. The default aws/s3 AWS KMS master key is used if this element is absent while the sse_algorithm is aws:kms string "" no
label_key_case Controls the letter case of the tags keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the tags input.
Possible values: lower, title, upper.
Default value: title.
string null no
label_order The order in which the labels (ID elements) appear in the id.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
list(string) null no
label_value_case Controls the letter case of ID elements (labels) as included in id,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the tags input.
Possible values: lower, title, upper and none (no transformation).
Set this to title and set delimiter to "" to yield Pascal Case IDs.
Default value: lower.
string null no
labels_as_tags Set of labels (ID elements) to include as tags in the tags output.
Default is to include all labels.
Tags with empty values will not be included in the tags output.
Set to [] to suppress all generated tags.
Notes:
The value of the name tag, if included, will be the id, not the name.
Unlike other null-label inputs, the initial setting of labels_as_tags cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
set(string)
[
"default"
]
no
lifecycle_configuration_rules A list of lifecycle V2 rules
list(object({
enabled = optional(bool, true)
id = string

abort_incomplete_multipart_upload_days = optional(number)

# filter_and is the and configuration block inside the filter configuration.
# This is the only place you should specify a prefix.
filter_and = optional(object({
object_size_greater_than = optional(number) # integer >= 0
object_size_less_than = optional(number) # integer >= 1
prefix = optional(string)
tags = optional(map(string), {})
}))
expiration = optional(object({
date = optional(string) # string, RFC3339 time format, GMT
days = optional(number) # integer > 0
expired_object_delete_marker = optional(bool)
}))
noncurrent_version_expiration = optional(object({
newer_noncurrent_versions = optional(number) # integer > 0
noncurrent_days = optional(number) # integer >= 0
}))
transition = optional(list(object({
date = optional(string) # string, RFC3339 time format, GMT
days = optional(number) # integer > 0
storage_class = optional(string)
# string/enum, one of GLACIER, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, DEEP_ARCHIVE, GLACIER_IR.
})), [])

noncurrent_version_transition = optional(list(object({
newer_noncurrent_versions = optional(number) # integer >= 0
noncurrent_days = optional(number) # integer >= 0
storage_class = optional(string)
# string/enum, one of GLACIER, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, DEEP_ARCHIVE, GLACIER_IR.
})), [])
}))
[] no
lifecycle_rule_ids DEPRECATED (use lifecycle_configuration_rules): A list of IDs to assign to corresponding lifecycle_rules list(string) [] no
lifecycle_rules DEPRECATED (use lifecycle_configuration_rules): A list of lifecycle rules
list(object({
prefix = string
enabled = bool
tags = map(string)

enable_glacier_transition = bool
enable_deeparchive_transition = bool
enable_standard_ia_transition = bool
enable_current_object_expiration = bool
enable_noncurrent_version_expiration = bool

abort_incomplete_multipart_upload_days = number
noncurrent_version_glacier_transition_days = number
noncurrent_version_deeparchive_transition_days = number
noncurrent_version_expiration_days = number

standard_transition_days = number
glacier_transition_days = number
deeparchive_transition_days = number
expiration_days = number
}))
null no
logging Bucket access logging configuration. Empty list for no logging, list of 1 to enable logging.
list(object({
bucket_name = string
prefix = string
}))
[] no
name ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a tag.
The "name" tag is set to the full id string. There is no tag with the value of the name input.
string null no
namespace ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique string null no
object_lock_configuration A configuration for S3 object locking. With S3 Object Lock, you can store objects using a write once, read many (WORM) model. Object Lock can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
object({
mode = string # Valid values are GOVERNANCE and COMPLIANCE.
days = number
years = number
})
null no
privileged_principal_actions List of actions to permit privileged_principal_arns to perform on bucket and bucket prefixes (see privileged_principal_arns) list(string) [] no
privileged_principal_arns List of maps. Each map has a key, an IAM Principal ARN, whose associated value is
a list of S3 path prefixes to grant privileged_principal_actions permissions for that principal,
in addition to the bucket itself, which is automatically included. Prefixes should not begin with '/'.
list(map(list(string))) [] no
regex_replace_chars Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, "/[^a-zA-Z0-9-]/" is used to remove all characters other than hyphens, letters and digits.
string null no
replication_rules DEPRECATED (use s3_replication_rules): Specifies the replication rules for S3 bucket replication if enabled. You must also set s3_replication_enabled to true. list(any) null no
restrict_public_buckets Set to false to disable restricting the bucket from being made public bool true no
s3_object_ownership Specifies the S3 object ownership control.
Valid values are ObjectWriter, BucketOwnerPreferred, and BucketOwnerEnforced.
Defaults to "ObjectWriter" for backwards compatibility, but we recommend setting "BucketOwnerEnforced" instead.
string "ObjectWriter" no
s3_replica_bucket_arn A single S3 bucket ARN to use for all replication rules.
Note: The destination bucket can be specified in the replication rule itself
(which allows for multiple destinations), in which case it will take precedence over this variable.
string "" no
s3_replication_enabled Set this to true and specify s3_replication_rules to enable replication. versioning_enabled must also be true. bool false no
s3_replication_permissions_boundary_arn Permissions boundary ARN for the created IAM replication role. string null no
s3_replication_rules Specifies the replication rules for S3 bucket replication if enabled. You must also set s3_replication_enabled to true.
list(object({
id = optional(string)
priority = optional(number)
prefix = optional(string)
status = optional(string, "Enabled")
# delete_marker_replication { status } had been flattened for convenience
delete_marker_replication_status = optional(string, "Disabled")
# Add the configuration as it appears in the resource, for consistency
# this nested version takes precedence if both are provided.
delete_marker_replication = optional(object({
status = string
}))

# destination_bucket is specified here rather than inside the destination object because before optional
# attributes, it made it easier to work with the Terraform type system and create a list of consistent type.
# It is preserved for backward compatibility, but the nested version takes priority if both are provided.
destination_bucket = optional(string) # destination bucket ARN, overrides s3_replica_bucket_arn

destination = object({
bucket = optional(string) # destination bucket ARN, overrides s3_replica_bucket_arn
storage_class = optional(string, "STANDARD")
# replica_kms_key_id at this level is for backward compatibility, and is overridden by the one in encryption_configuration
replica_kms_key_id = optional(string, "")
encryption_configuration = optional(object({
replica_kms_key_id = string
}))
access_control_translation = optional(object({
owner = string
}))
# account_id is for backward compatibility, overridden by account
account_id = optional(string)
account = optional(string)
# For convenience, specifying either metrics or replication_time enables both
metrics = optional(object({
event_threshold = optional(object({
minutes = optional(number, 15) # Currently 15 is the only valid number
}), { minutes = 15 })
status = optional(string, "Enabled")
}), { status = "Disabled" })
# To preserve backward compatibility, Replication Time Control (RTC) is automatically enabled
# when metrics are enabled. To enable metrics without RTC, you must explicitly configure
# replication_time.status = "Disabled".
replication_time = optional(object({
time = optional(object({
minutes = optional(number, 15) # Currently 15 is the only valid number
}), { minutes = 15 })
status = optional(string)
}))
})

source_selection_criteria = optional(object({
replica_modifications = optional(object({
status = string # Either Enabled or Disabled
}))
sse_kms_encrypted_objects = optional(object({
status = optional(string)
}))
}))
# filter.prefix overrides top level prefix
filter = optional(object({
prefix = optional(string)
tags = optional(map(string), {})
}))
}))
null no
s3_replication_source_roles Cross-account IAM Role ARNs that will be allowed to perform S3 replication to this bucket (for replication within the same AWS account, it's not necessary to adjust the bucket policy). list(string) [] no
source_policy_documents List of IAM policy documents (in JSON) that are merged together into the exported document.
Statements defined in source_policy_documents must have unique SIDs.
Statement having SIDs that match policy SIDs generated by this module will override them.
list(string) [] no
sse_algorithm The server-side encryption algorithm to use. Valid values are AES256 and aws:kms string "AES256" no
ssm_base_path The base path for SSM parameters where created IAM user's access key is stored string "/s3_user/" no
stage ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release' string null no
store_access_key_in_ssm Set to true to store the created IAM user's access key in SSM Parameter Store,
false to store them in Terraform state as outputs.
Since Terraform state would contain the secrets in plaintext,
use of SSM Parameter Store is recommended.
bool false no
tags Additional tags (e.g. {'BusinessUnit': 'XYZ'}).
Neither the tag keys nor the tag values will be modified by this module.
map(string) {} no
tenant ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for string null no
transfer_acceleration_enabled Set this to true to enable S3 Transfer Acceleration for the bucket.
Note: When this is set to false Terraform does not perform drift detection
and will not disable Transfer Acceleration if it was enabled outside of Terraform.
To disable it via Terraform, you must set this to true and then to false.
Note: not all regions support Transfer Acceleration.
bool false no
user_enabled Set to true to create an IAM user with permission to access the bucket bool false no
user_permissions_boundary_arn Permission boundary ARN for the IAM user created to access the bucket. string null no
versioning_enabled A state of versioning. Versioning is a means of keeping multiple variants of an object in the same bucket bool true no
website_configuration Specifies the static website hosting configuration object
list(object({
index_document = string
error_document = string
routing_rules = list(object({
condition = object({
http_error_code_returned_equals = string
key_prefix_equals = string
})
redirect = object({
host_name = string
http_redirect_code = string
protocol = string
replace_key_prefix_with = string
replace_key_with = string
})
}))
}))
[] no
website_redirect_all_requests_to If provided, all website requests will be redirected to the specified host name and protocol
list(object({
host_name = string
protocol = string
}))
[] no
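
Tying several of the inputs above together, a hedged sketch of a bucket configured for static website hosting with CORS and access logging (the allowed origin and log bucket name are hypothetical, and routing_rules is left empty):

module "s3_bucket" {
  source = "cloudposse/s3-bucket/aws"
  # version = "x.x.x"
  name      = "site"
  stage     = "test"
  namespace = "eg"

  website_configuration = [
    {
      index_document = "index.html"
      error_document = "404.html"
      routing_rules  = []
    }
  ]

  cors_configuration = [
    {
      allowed_methods = ["GET", "HEAD"]
      allowed_origins = ["https://www.example.com"] # hypothetical origin
      max_age_seconds = 3600
    }
  ]

  logging = [
    {
      bucket_name = "eg-test-s3-access-logs" # hypothetical log bucket
      prefix      = "logs/site/"
    }
  ]
}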

Outputs

Name Description
access_key_id The access key ID, if var.user_enabled && var.access_key_enabled.
While sensitive, it does not need to be kept secret, so this is output regardless of var.store_access_key_in_ssm.
access_key_id_ssm_path The SSM Path under which the S3 User's access key ID is stored
bucket_arn Bucket ARN
bucket_domain_name FQDN of bucket
bucket_id Bucket Name (aka ID)
bucket_region Bucket region
bucket_regional_domain_name The bucket region-specific domain name
bucket_website_domain The bucket website domain, if website is enabled
bucket_website_endpoint The bucket website endpoint, if website is enabled
enabled Is module enabled
replication_role_arn The ARN of the replication IAM Role
secret_access_key The secret access key will be output if created and not stored in SSM. However, the secret access key, if created,
will be written to the Terraform state file unencrypted, regardless of any other settings.
See the Terraform documentation for more details.
secret_access_key_ssm_path The SSM Path under which the S3 User's secret access key is stored
user_arn The ARN assigned by AWS for the user
user_enabled Is user creation enabled
user_name Normalized IAM user name
user_unique_id The user unique ID assigned by AWS

Share the Love

Like this project? Please give it a ★ on our GitHub! (it helps us a lot)

Are you using this project or any of our other projects? Consider leaving a testimonial. =)

Related Projects

Check out these related projects.

Help

Got a question? We got answers.

File a GitHub issue, send us an email or join our Slack Community.

README Commercial Support

DevOps Accelerator for Startups

We are a DevOps Accelerator. We'll help you build your cloud infrastructure from the ground up so you can own it. Then we'll show you how to operate it and stick around for as long as you need us.

Learn More

Work directly with our team of DevOps experts via email, slack, and video conferencing.

We deliver 10x the value for a fraction of the cost of a full-time engineer. Our track record is not even funny. If you want things done right and you need it done FAST, then we're your best bet.

  • Reference Architecture. You'll get everything you need from the ground up built using 100% infrastructure as code.
  • Release Engineering. You'll have end-to-end CI/CD with unlimited staging environments.
  • Site Reliability Engineering. You'll have total visibility into your apps and microservices.
  • Security Baseline. You'll have built-in governance with accountability and audit logs for all changes.
  • GitOps. You'll be able to operate your infrastructure via Pull Requests.
  • Training. You'll receive hands-on training so your team can operate what we build.
  • Questions. You'll have a direct line of communication between our teams via a Shared Slack channel.
  • Troubleshooting. You'll get help to triage when things aren't working.
  • Code Reviews. You'll receive constructive feedback on Pull Requests.
  • Bug Fixes. We'll rapidly work with you to fix any bugs in our projects.

Slack Community

Join our Open Source Community on Slack. It's FREE for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to rollout and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally sweet infrastructure.

Discourse Forums

Participate in our Discourse Forums. Here you'll find answers to commonly asked questions. Most questions will be related to the enormous number of projects we support on our GitHub. Come here to collaborate on answers, find solutions, and get ideas about the products and services we value. It only takes a minute to get started! Just sign in with SSO using your GitHub account.

Newsletter

Sign up for our newsletter that covers everything on our technology radar. Receive updates on what we're up to on GitHub as well as awesome new projects we discover.

Office Hours

Join us every Wednesday via Zoom for our weekly "Lunch & Learn" sessions. It's FREE for everyone!


Contributing

Bug Reports & Feature Requests

Please use the issue tracker to report any bugs or file feature requests.

Developing

If you are interested in being a contributor and want to get involved in developing this project or help out with our other projects, we would love to hear from you! Shoot us an email.

In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.

  1. Fork the repo on GitHub
  2. Clone the project to your own machine
  3. Commit changes to your own branch
  4. Push your work back up to your fork
  5. Submit a Pull Request so that we can review your changes

NOTE: Be sure to merge the latest changes from "upstream" before making a pull request!

Copyright

Copyright © 2017-2023 Cloud Posse, LLC

License


See LICENSE for full details.

Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.

Trademarks

All other trademarks referenced herein are the property of their respective owners.

About

This project is maintained and funded by Cloud Posse, LLC. Like it? Please let us know by leaving a testimonial!

Cloud Posse

We're a DevOps Professional Services company based in Los Angeles, CA. We ❤️ Open Source Software.

We offer paid support on all of our projects.

Check out our other projects, follow us on twitter, apply for a job, or hire us to help with your cloud strategy and implementation.

Contributors

Erik Osterman
Andriy Knysh
Maxim Mironenko
Josh Myers
Yonatan Koren
Nuru


More Repositories

  1. geodesic (Shell, 915 stars): 🚀 Geodesic is a DevOps Linux Toolbox in Docker
  2. bastion (Shell, 623 stars): 🔒 Secure Bastion implemented as Docker Container running Alpine Linux with Google Authenticator & DUO MFA support
  3. terraform-null-label (HCL, 516 stars): Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes])
  4. atmos (Go, 490 stars): 👽 Workflow automation tool for DevOps. Keep configuration DRY with hierarchical imports of configurations, inheritance, and WAY more. Native support for Terraform and Helmfile.
  5. terraform-aws-eks-cluster (HCL, 453 stars): Terraform module for provisioning an EKS cluster
  6. terraform-aws-components (HCL, 403 stars): Opinionated, self-contained Terraform root modules that each solve one, specific problem
  7. build-harness (Makefile, 348 stars): Collection of Makefiles to facilitate building Golang projects, Dockerfiles, Helm charts, and more
  8. terraform-aws-tfstate-backend (HCL, 344 stars): Terraform module that provisions an S3 bucket to store the `terraform.tfstate` file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption.
  9. terraform-aws-ecs-container-definition (HCL, 316 stars): Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource
  10. terraform-aws-elastic-beanstalk-environment (HCL, 292 stars): Terraform module to provision an AWS Elastic Beanstalk Environment
  11. terraform-aws-cloudfront-s3-cdn (HCL, 255 stars): Terraform module to easily provision CloudFront CDN backed by an S3 origin
  12. helmfiles (Makefile, 250 stars): Comprehensive Distribution of Helmfiles for Kubernetes
  13. terraform-aws-jenkins (HCL, 250 stars): Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack
  14. terraform-aws-vpc (HCL, 212 stars): Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways
  15. terraform-aws-elasticsearch (HCL, 211 stars): Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash.
  16. terraform-aws-ecs-web-app (HCL, 206 stars): Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more.
  17. terraform-aws-cloudtrail-cloudwatch-alarms (HCL, 193 stars): Terraform module for creating alarms for tracking important changes and occurrences from cloudtrail.
  18. tfmask (Go, 191 stars): Terraform utility to mask select output from `terraform plan` and `terraform apply`
  19. terraform-aws-cicd (HCL, 185 stars): Terraform Module for CI/CD with AWS Code Pipeline and Code Build
  20. copyright-header (Ruby, 177 stars): © Copyright Header is a utility to manipulate software licenses on source code.
  21. terraform-aws-ecr (HCL, 170 stars): Terraform Module to manage Docker Container Registries on AWS ECR
  22. terraform-aws-dynamic-subnets (HCL, 165 stars): Terraform module for public and private subnets provisioning in existing VPC
  23. prometheus-to-cloudwatch (Go, 159 stars): Utility for scraping Prometheus metrics from a Prometheus client endpoint and publishing them to CloudWatch
  24. reference-architectures (HCL, 154 stars): [WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process.
  25. charts (Mustache, 149 stars): The "Cloud Posse" Distribution of Kubernetes Applications
  26. terraform-null-ansible (HCL, 146 stars): Terraform Module to run ansible playbooks
  27. terraform-aws-ec2-instance (HCL, 143 stars): Terraform module for provisioning a general purpose EC2 host
  28. terraform-aws-key-pair (HCL, 141 stars): Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys)
  29. terraform-aws-ecs-codepipeline (HCL, 139 stars): Terraform Module for CI/CD with AWS Code Pipeline and Code Build for ECS https://cloudposse.com/
  30. terraform-aws-rds-cluster (HCL, 135 stars): Terraform module to provision an RDS Aurora cluster for MySQL or Postgres
  31. terraform-aws-rds (HCL, 134 stars): Terraform module to provision AWS RDS instances
  32. github-authorized-keys (Go, 131 stars): Use GitHub teams to manage system user accounts and authorized_keys
  33. terraform-aws-ecs-alb-service-task (HCL, 129 stars): Terraform module which implements an ECS service which exposes a web service via ALB.
  34. terraform-aws-elasticache-redis (HCL, 129 stars): Terraform module to provision an ElastiCache Redis Cluster
  35. packages (Shell, 125 stars): Cloud Posse DevOps distribution of linux packages for native apps, binaries, alpine packages, debian packages, and redhat packages.
  36. terraform-example-module (HCL, 125 stars): Example Terraform Module Scaffolding
  37. terraform-aws-ec2-bastion-server (HCL, 124 stars): Terraform module to define a generic Bastion host with parameterized user_data and support for AWS SSM Session Manager for remote access with IAM authentication.
  38. tfenv (Go, 123 stars): Transform environment variables for use with Terraform (e.g. `HOSTNAME` ⇨ `TF_VAR_hostname`)
  39. terraform-terraform-label (HCL, 116 stars): Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes])
  40. terraform-aws-s3-website (HCL, 114 stars): Terraform Module for Creating S3 backed Websites and Route53 DNS
  41. terraform-aws-ec2-autoscale-group (HCL, 113 stars): Terraform module to provision Auto Scaling Group and Launch Template on AWS
  42. terraform-aws-vpc-peering-multi-account (HCL, 108 stars): Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers
  43. terraform-aws-vpc-peering (HCL, 105 stars): Terraform module to create a peering connection between two VPCs in the same AWS account.
  44. github-commenter (Go, 104 stars): Command line utility for creating GitHub comments on Commits, Pull Request Reviews or Issues
  45. terraform-aws-rds-cloudwatch-sns-alarms (HCL, 103 stars): Terraform module that configures important RDS alerts using CloudWatch and sends them to an SNS topic
  46. terraform-aws-s3-log-storage (HCL, 103 stars): This module creates an S3 bucket suitable for receiving logs from other AWS services such as S3, CloudFront, and CloudTrail
  47. terraform-aws-iam-role (HCL, 101 stars): A Terraform module that creates IAM role with provided JSON IAM polices documents.
  48. github-status-updater (Go, 100 stars): Command line utility for updating GitHub commit statuses and enabling required status checks for pull requests
  49. terraform-aws-codebuild (HCL, 96 stars): Terraform Module to easily leverage AWS CodeBuild for Continuous Integration
  50. terraform-aws-alb (HCL, 94 stars): Terraform module to provision a standard ALB for HTTP/HTTP traffic
  51. terraform-aws-cloudfront-cdn (HCL, 93 stars): Terraform Module that implements a CloudFront Distribution (CDN) for a custom origin.
  52. terraform-aws-ssm-parameter-store (HCL, 93 stars): Terraform module to populate AWS Systems Manager (SSM) Parameter Store with values from Terraform. Works great with Chamber.
  53. terraform-aws-acm-request-certificate (HCL, 93 stars): Terraform module to request an ACM certificate for a domain name and create a CNAME record in the DNS zone to complete certificate validation
  54. terraform-provider-utils (Go, 93 stars): The Cloud Posse Terraform Provider for various utilities (e.g. deep merging, stack configuration management)
  55. terraform-aws-multi-az-subnets (HCL, 90 stars): Terraform module for multi-AZ public and private subnets provisioning
  56. terraform-aws-cloudtrail (HCL, 90 stars): Terraform module to provision an AWS CloudTrail and an encrypted S3 bucket with versioning to store CloudTrail logs
  57. sudosh (Go, 88 stars): Shell wrapper to run a login shell with `sudo` as the current user for the purpose of audit logging
  58. terraform-aws-backup (HCL, 87 stars): Terraform module to provision AWS Backup, a fully managed backup service that makes it easy to centralize and automate the back up of data across AWS services such as EBS volumes, RDS databases, DynamoDB tables, EFS file systems, and AWS Storage Gateway volumes.
  59. terraform-aws-eks-workers (HCL, 84 stars): Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers
  60. terraform-aws-eks-node-group (HCL, 82 stars): Terraform module to provision a fully managed AWS EKS Node Group
  61. terraform-aws-efs (HCL, 79 stars): Terraform Module to define an EFS Filesystem (aka NFS)
  62. terraform-datadog-platform (HCL, 79 stars): Terraform module to configure and provision Datadog monitors, custom RBAC roles with permissions, Datadog synthetic tests, Datadog child organizations, and other Datadog resources from a YAML configuration, complete with automated tests.
  63. terraform-aws-iam-system-user (HCL, 76 stars): Terraform Module to Provision a Basic IAM System User Suitable for CI/CD Systems (E.g. TravisCI, CircleCI)
  64. terraform-aws-sso (HCL, 76 stars): Terraform module to configure AWS Single Sign-On (SSO)
  65. terraform-aws-dynamodb (HCL, 72 stars): Terraform module that implements AWS DynamoDB with support for AutoScaling
  66. terraform-aws-emr-cluster (HCL, 70 stars): Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS
  67. terraform-aws-msk-apache-kafka-cluster (HCL, 68 stars): Terraform module to provision AWS MSK
  68. terraform-yaml-config (HCL, 66 stars): Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps
  69. terraform-aws-iam-user (HCL, 66 stars): Terraform Module to provision a basic IAM user suitable for humans.
  70. slack-notifier (Go, 65 stars): Command line utility to send messages with attachments to Slack channels via Incoming Webhooks
  71. terraform-aws-cloudwatch-logs (HCL, 61 stars): Terraform Module to Provide a CloudWatch Logs Endpoint
  72. terraform-aws-kms-key (HCL, 61 stars): Terraform module to provision a KMS key with alias
  73. actions (TypeScript, 57 stars): Our Library of GitHub Actions
  74. terraform-aws-iam-s3-user (HCL, 53 stars): Terraform module to provision a basic IAM user with permissions to access S3 resources, e.g. to give the user read/write/delete access to the objects in an S3 bucket
  75. load-testing (JavaScript, 52 stars): A collection of best practices, workflows, scripts and scenarios that Cloud Posse uses for load and performance testing of websites and applications (in particular those deployed on Kubernetes clusters)
  76. docs (Python, 51 stars): 📘 SweetOps documentation for the Cloud Posse way of doing Infrastructure as Code. https://docs.cloudposse.com
  77. terraform-aws-documentdb-cluster (HCL, 51 stars): Terraform module to provision a DocumentDB cluster on AWS
  78. terraform-aws-iam-policy-document-aggregator (HCL, 50 stars): Terraform module to aggregate multiple IAM policy documents into single policy document.
  79. terraform-aws-vpn-connection (HCL, 49 stars): Terraform module to provision a site-to-site VPN connection between a VPC and an on-premises network
  80. terraform-aws-route53-alias (HCL, 48 stars): Terraform Module to Define Vanity Host/Domain (e.g. `brand.com`) as an ALIAS record
  81. terraform-aws-ecs-atlantis (HCL, 47 stars): Terraform module for deploying Atlantis as an ECS Task
  82. terraform-aws-cloudtrail-s3-bucket (HCL, 47 stars): S3 bucket with built in IAM policy to allow CloudTrail logs
  83. terraform-yaml-stack-config (HCL, 47 stars): Terraform module that loads an opinionated "stack" configuration from local or remote YAML sources. It supports deep-merged variables, settings, ENV variables, backend config, and remote state outputs for Terraform and helmfile components.
  84. terraform-aws-transit-gateway (HCL, 46 stars): Terraform module to provision AWS Transit Gateway, AWS Resource Access Manager (AWS RAM) Resource, and share the Transit Gateway with the Organization or another AWS Account.
  85. terraform-aws-route53-cluster-zone (HCL, 46 stars): Terraform module to easily define consistent cluster domains on Route53 (e.g. `prod.ourcompany.com`)
  86. terraform-aws-named-subnets (HCL, 45 stars): Terraform module for named subnets provisioning.
  87. terraform-aws-route53-cluster-hostname (HCL, 45 stars): Terraform module to define a consistent AWS Route53 hostname
  88. terraform-aws-elastic-beanstalk-application (HCL, 44 stars): Terraform Module to define an ElasticBeanstalk Application
  89. terraform-aws-config (HCL, 43 stars): This module configures AWS Config, a service that enables you to assess, audit, and evaluate the configurations of your AWS resources.
  90. terraform-aws-eks-fargate-profile (HCL, 42 stars): Terraform module to provision an EKS Fargate Profile
  91. terraform-aws-efs-backup (HCL, 41 stars): Terraform module designed to easily backup EFS filesystems to S3 using DataPipeline
  92. terraform-aws-sns-topic (HCL, 40 stars): Terraform Module to Provide an Amazon Simple Notification Service (SNS)
  93. terraform-aws-service-control-policies (HCL, 38 stars): Terraform module to provision Service Control Policies (SCP) for AWS Organizations, Organizational Units, and AWS accounts
  94. terraform-aws-cloudformation-stack (HCL, 38 stars): Terraform module to provision CloudFormation Stack
  95. terraform-aws-ec2-client-vpn (HCL, 37 stars)
  96. terraform-provider-awsutils (Go, 36 stars): Terraform provider to help with various AWS automation tasks (mostly all that stuff we cannot accomplish with the official AWS terraform provider)
  97. terraform-aws-utils (HCL, 36 stars): Utility functions for use with Terraform in the AWS environment
  98. terraform-aws-ecs-cloudwatch-sns-alarms (HCL, 36 stars): Terraform module to create CloudWatch Alarms on ECS Service level metrics.
  99. terraform-aws-iam-assumed-roles (HCL, 33 stars): Terraform Module for Assumed Roles on AWS with IAM Groups Requiring MFA
  100. terraform-aws-mq-broker (HCL, 33 stars): Terraform module for provisioning an AmazonMQ broker