
terraform-provider-eksctl

Manage AWS EKS clusters using Terraform and eksctl.

Benefits:

  • terraform apply to bring up your whole infrastructure.
  • No more generating eksctl cluster.yaml with Terraform and a glue shell script just for integration between TF and eksctl.
  • Support for using the same pod IAM role across clusters
    • Useful for e.g. swapping the ArgoCD cluster without changing the target clusters.

Installation

For Terraform 0.12:

Install the terraform-provider-eksctl binary under .terraform/plugins/${OS}_${ARCH}, so that the binary is at e.g. ${WORKSPACE}/.terraform/plugins/darwin_amd64/terraform-provider-eksctl.

You can also install the provider globally under ${HOME}/.terraform.d/plugins/${OS}_${ARCH}, so that it is available from all the tf workspaces.

For Terraform 0.13 and later:

The provider is available on the Terraform Registry, so you can just add the following to your tf file to install it:

terraform {
  required_providers {
    eksctl = {
      source = "mumoshu/eksctl"
      version = "VERSION"
    }
  }
}

Please replace VERSION with the version number of the provider without the v prefix, like 0.3.14.

Usage

There is nothing to configure for the provider itself, so you first declare it like:

provider "eksctl" {}

You use the eksctl_cluster and eksctl_cluster_deployment resources to CRUD your clusters from Terraform.

Usually, the former is what you want. It just runs eksctl to manage the cluster exactly as you have declared it in your tf file.

The latter is, as its name says, for managing a set of eksctl clusters in an opinionated way.

On terraform apply:

  • For eksctl_cluster, the provider runs a series of eksctl update [RESOURCE] commands. It uses eksctl delete nodegroup --drain when deleting nodegroups, for high availability.
  • For eksctl_cluster_deployment, the provider runs eksctl create and a series of eksctl update [RESOURCE] and eksctl delete commands, depending on the situation. It also uses eksctl delete nodegroup --drain when deleting nodegroups.

On terraform destroy, the provider runs eksctl delete.

The computed field output is used to surface the output from eksctl. You can use it in string interpolation to produce a useful Terraform output.
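
For example, a minimal sketch that surfaces eksctl's output as a Terraform output (assuming an eksctl_cluster resource named primary, as in the examples below; the output name here is arbitrary):

output "eksctl_primary_output" {
  # `output` is the provider's computed field holding what eksctl printed.
  value = eksctl_cluster.primary.output
}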

Declaring eksctl_cluster resource

It's almost like writing an eksctl "cluster.yaml" and embedding it into the spec attribute of the Terraform resource definition block, except that some attributes, like the cluster name and region, have dedicated HCL attributes.

Depending on the scenario, there are a few patterns in how you'd declare an eksctl_cluster resource.

  • Ephemeral cluster (Don't reuse VPC, subnets, or anything)
  • Reuse VPC
  • Reuse VPC and subnets
  • Reuse VPC, subnets, and ALBs

In general, for any non-ephemeral cluster you must set up the following prerequisites:

  • VPC
  • Public/Private subnets
  • ALB and listener(s) (Only when you use blue-green cluster deployment)

Ephemeral cluster

When you let eksctl manage every AWS resource for the cluster, your resource should look like the below:

provider "eksctl" {}

resource "eksctl_cluster" "primary" {
  eksctl_bin = "eksctl-0.20.0"
  name = "primary1"
  region = "us-east-2"
  spec = <<-EOS
  nodeGroups:
  - name: ng1
    instanceType: m5.large
    desiredCapacity: 1
  EOS
}

Reuse VPC

Assuming you've already created a VPC with ID vpc-09c6c9f579baef3ea, your resource should look like the below:

provider "eksctl" {}

resource "eksctl_cluster" "vpcreuse1" {
  eksctl_bin = "eksctl-0.20.0"
  name = "vpcreuse1"
  region = "us-east-2"
  vpc_id = "vpc-09c6c9f579baef3ea"
  spec = <<-EOS
  nodeGroups:
  - name: ng1
    instanceType: m5.large
    desiredCapacity: 1
  EOS
}

Reuse VPC and subnets

Assuming you've already created a VPC with ID vpc-09c6c9f579baef3ea, a private subnet "subnet-1234", and a public subnet "subnet-2345", your resource should look like the below:

provider "eksctl" {}

resource "eksctl_cluster" "vpcreuse1" {
  eksctl_bin = "eksctl-0.20.0"
  name = "vpcreuse1"
  region = "us-east-2"
  vpc_id = "vpc-09c6c9f579baef3ea"
  spec = <<-EOS
  vpc:
    cidr: "192.168.0.0/16"       # (optional, must match CIDR used by the given VPC)
    subnets:
      # must provide 'private' and/or 'public' subnets by availability zone as shown
      private:
        us-east-2a:
          id: "subnet-1234"
          cidr: "192.168.160.0/19" # (optional, must match CIDR used by the given subnet)
      public:
        us-east-2a:
          id: "subnet-2345"
          cidr: "192.168.64.0/19" # (optional, must match CIDR used by the given subnet)

  nodeGroups:
    - name: ng1
      instanceType: m5.large
      desiredCapacity: 1
  EOS
}

Reuse VPC, subnets, and ALBs

In a production setup, the VPC, subnets, ALB, and listeners should be reused across revisions of the cluster, so that you can let the provider switch cluster revisions in a blue-green/canary deployment manner.

Assuming you've used the terraform-aws-vpc module for setting up the VPC and subnets, an eksctl_cluster resource should usually look like the below:

resource "eksctl_cluster" "primary" {
  eksctl_bin = "eksctl-dev"
  name = "existingvpc2"
  region = "us-east-2"
  api_version = "eksctl.io/v1alpha5"
  version = "1.16"
  vpc_id = module.vpc.vpc_id
  revision = 1
  spec = <<-EOS
  nodeGroups:
  - name: ng2
    instanceType: m5.large
    desiredCapacity: 1
    securityGroups:
      attachIDs:
      - ${aws_security_group.public_alb_private_backend.id}

  iam:
    withOIDC: true
    serviceAccounts: []

  vpc:
    cidr: "${module.vpc.vpc_cidr_block}"       # (optional, must match CIDR used by the given VPC)
    subnets:
      # must provide 'private' and/or 'public' subnets by availability zone as shown
      private:
        ${module.vpc.azs[0]}:
          id: "${module.vpc.private_subnets[0]}"
          cidr: "${module.vpc.private_subnets_cidr_blocks[0]}" # (optional, must match CIDR used by the given subnet)
        ${module.vpc.azs[1]}:
          id: "${module.vpc.private_subnets[1]}"
          cidr: "${module.vpc.private_subnets_cidr_blocks[1]}"  # (optional, must match CIDR used by the given subnet)
        ${module.vpc.azs[2]}:
          id: "${module.vpc.private_subnets[2]}"
          cidr: "${module.vpc.private_subnets_cidr_blocks[2]}"   # (optional, must match CIDR used by the given subnet)
      public:
        ${module.vpc.azs[0]}:
          id: "${module.vpc.public_subnets[0]}"
          cidr: "${module.vpc.public_subnets_cidr_blocks[0]}" # (optional, must match CIDR used by the given subnet)
        ${module.vpc.azs[1]}:
          id: "${module.vpc.public_subnets[1]}"
          cidr: "${module.vpc.public_subnets_cidr_blocks[1]}"  # (optional, must match CIDR used by the given subnet)
        ${module.vpc.azs[2]}:
          id: "${module.vpc.public_subnets[2]}"
          cidr: "${module.vpc.public_subnets_cidr_blocks[2]}"   # (optional, must match CIDR used by the given subnet)
  EOS
}
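
The module.vpc references above assume a VPC created with the terraform-aws-vpc module. A minimal sketch of such a module block (the name, CIDRs, and AZs here are hypothetical, and the exact inputs depend on the module version you use) could look like:

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  # Hypothetical values; adjust to your environment.
  name = "eksctl-vpc"
  cidr = "192.168.0.0/16"

  azs             = ["us-east-2a", "us-east-2b", "us-east-2c"]
  private_subnets = ["192.168.160.0/19", "192.168.192.0/19", "192.168.224.0/19"]
  public_subnets  = ["192.168.0.0/19", "192.168.32.0/19", "192.168.64.0/19"]

  # NAT gateway so nodes in private subnets can reach the internet.
  enable_nat_gateway = true
}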

Drain NodeGroups

You can use drain_node_groups to declare which nodegroup(s) should be drained with eksctl drain nodegroup.

provider "eksctl" {}

resource "eksctl_cluster" "vpcreuse1" {
  eksctl_bin = "eksctl-0.20.0"
  name   = "vpcreuse1"
  spec   = <<-EOS
  vpc:
    subnets:
      private:
        us-east-2a: { id: "${local.subnet_private_ids[0]}" }
        us-east-2b: { id: "${local.subnet_private_ids[1]}" }
        us-east-2c: { id: "${local.subnet_private_ids[2]}" }
      public:
        us-east-2a: { id: "${local.subnet_public_ids[0]}" }
        us-east-2b: { id: "${local.subnet_public_ids[1]}" }
        us-east-2c: { id: "${local.subnet_public_ids[2]}" }

  iam:
    withOIDC: true
    serviceAccounts: []

  nodeGroups:
    - name: ng1
      instanceType: t2.small
      desiredCapacity: 1
    - name: ng2
      instanceType: t2.small
      desiredCapacity: 1
  EOS

  drain_node_groups = {
    ng1 = true,
    ng2 = false,
  }
}

After terraform apply, you can see the drained nodegroup's nodes marked as unschedulable:

> kubectl get no
NAME                                      STATUS                     ROLES    AGE    VERSION
ip-10-0-4-28.us-east-2.compute.internal   Ready,SchedulingDisabled   <none>   4d1h   v1.16.13-eks-ec92d4
ip-10-0-5-72.us-east-2.compute.internal   Ready                      <none>   4d1h   v1.16.13-eks-ec92d4

Add aws-auth ConfigMap

You can use iam_identity_mapping to grant additional AWS users or roles the ability to operate the EKS cluster, by letting the provider update the aws-auth ConfigMap.

To get started, add one or more iam_identity_mapping block(s) as in the example below:

provider "eksctl" {}

locals {
  iams = [
    {
      iamarn   = "arn:aws:iam::123456789012:role/master-eks-role"
      username = "master-eks-role"
      groups = [
        "system:masters"
      ]
    },
    {
      iamarn   = "arn:aws:iam::123456789012:user/user-admin"
      username = "user-admin"
      groups = [
        "system:masters"
      ]
    },
  ]
}

resource "eksctl_cluster" "myeks" {
  name   = "myeks"
  region = "us-east-1"
  spec   = <<-EOS
  iam:
    withOIDC: true
    serviceAccounts: []
  nodeGroups:
    - name: ng1
      instanceType: t2.small
      desiredCapacity: 1
    - name: ng2
      instanceType: t2.small
      desiredCapacity: 1
  EOS


  dynamic "iam_identity_mapping" {
    for_each = local.iams
    content {
      iamarn   = iam_identity_mapping.value["iamarn"]
      username = iam_identity_mapping.value["username"]
      groups   = iam_identity_mapping.value["groups"]
    }
  }
}

output "aws_auth" {
  value = eksctl_cluster.myeks.aws_auth_configmap
}

On each terraform apply, the provider compares the current aws-auth ConfigMap against the desired contents, and runs eksctl create iamidentitymapping to create additional mappings and eksctl delete iamidentitymapping to delete redundant mappings.

You can confirm the result by running eksctl get iamidentitymapping:

$ eksctl get iamidentitymapping -c myeks -o yaml
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::123456789012:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-14SXZWF9IGX6O
  username: system:node:{{EC2PrivateDNSName}}
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::123456789012:role/eksctl-myeks-nodegroup-ng2-NodeInstanceRole-2IGYK2W51ZHJ
  username: system:node:{{EC2PrivateDNSName}}
- groups:
  - system:masters
  rolearn: arn:aws:iam::123456789012:role/admin-role
  username: admin-role
- groups:
  - system:masters
  userarn: arn:aws:iam::123456789012:user/user-admin
  username: user-admin

Advanced Features and Use-cases

There are a bunch more settings that help the app stay highly available while the cluster is being recreated, including:

  • kubernetes_resource_deletion_before_destroy
  • alb_attachment
  • pods_readiness_check
  • Cluster canary deployment

It's also highly recommended to include the git configuration and use an eksctl build that includes eksctl-io/eksctl#2274, so that Flux is installed in an unattended way and the cluster has everything deployed on launch. Otherwise blue-green deployments of the cluster don't make sense.

Please see the existingvpc example to see what a fully configured eksctl_cluster resource looks like, and the references below for details of each setting.

Delete Kubernetes resources before destroy

This option is available only within the eksctl_cluster_deployment resource.

Use kubernetes_resource_deletion_before_destroy blocks.

It is useful, e.g., for:

  • Stopping Flux so that it won't try (and fail) to install new manifests while the cluster is being terminated
  • Stopping pods whose IP addresses are exposed via a headless service and external-dns before the cluster goes down, so that stale pod IPs won't remain in the service discovery system

resource "eksctl_cluster_deployment" "primary" {
  name = "primary"
  region = "us-east-2"

  spec = <<-EOS
  nodeGroups:
  - name: ng2
    instanceType: m5.large
    desiredCapacity: 1
  EOS

  kubernetes_resource_deletion_before_destroy {
    namespace = "flux"
    kind = "deployment"
    name = "flux"
  }
}

Cluster canary deployment

Cluster canary deployment using ALB

The courier_alb resource is used to declaratively and gradually shift traffic among the given target groups.

In combination with standard aws_lb_* resources and two eksctl_cluster resources, you can conduct a "canary deployment" of the cluster.

This resource is useful but may be extracted out of this provider in the future.

A courier_alb looks like the below:

resource "eksctl_courier_alb" "my_alb_courier" {
  listener_arn = "<alb listener arn>"

  priority = "10"

  destination {
    target_group_arn = "<target group arn current>"

    weight = 0
  }

  destination {
    target_group_arn = "<target group arn next>"
    weight = 100
  }

  cloudwatch_metric {
    name = "http_errors_cw"

    # it will query from <now - 60 sec> to now, every 60 sec
    interval = "1m"

    max = 50

    query = "<QUERY>"
  }

  datadog_metric {
    name = "http_errors_dd"

    # it will query from <now - 60 sec> to now, every 60 sec
    interval = "1m"

    max = 50

    query = "<QUERY>"
  }
}

Let's say you want to serve your web service on port 80 of your internet-facing ALB. You'll start with an ALB, an ALB listener, two target groups, and two eksctl_cluster resources.

The below is the initial deployment with two clusters blue and green, where the traffic is 100% forwarded to blue and helmfile is used to deploy Helm charts to blue:

resource "aws_alb" "alb" {
  name = "alb"
  security_groups = [
    aws_security_group.public_alb.id
  ]
  subnets = module.vpc.public_subnets
  internal = false
  enable_deletion_protection = false
}

resource "aws_alb_listener" "mysvc" {
  port = 80
  protocol = "HTTP"
  load_balancer_arn = aws_alb.alb.arn
  default_action {
    type = "fixed-response"
    fixed_response {
      content_type = "text/plain"
      status_code = "404"
      message_body = "Nothing here"
    }
  }
}

resource "aws_lb_target_group" "blue" {
  name = "tg1"
  port = 30080
  protocol = "HTTP"
  vpc_id = module.vpc.vpc_id
}

resource "aws_lb_target_group" "green" {
  name = "tg2"
  port = 30080
  protocol = "HTTP"
  vpc_id = module.vpc.vpc_id
}

resource "eksctl_cluster" "blue" {
  name = "blue"
  region = "us-east-2"
  api_version = "eksctl.io/v1alpha5"
  version = "1.15"
  vpc_id = module.vpc.vpc_id
  spec = <<-EOS
  nodeGroups:
  - name: ng2
    instanceType: m5.large
    desiredCapacity: 1
    targetGroupARNs:
    - ${aws_lb_target_group.blue.arn}
  EOS
}

resource "eksctl_cluster" "green" {
  name = "green"
  region = "us-east-2"
  api_version = "eksctl.io/v1alpha5"
  version = "1.16"
  vpc_id = module.vpc.vpc_id
  spec = <<-EOS
  nodeGroups:
  - name: ng2
    instanceType: m5.large
    desiredCapacity: 1
    targetGroupARNs:
    - ${aws_lb_target_group.green.arn}
  EOS
}

resource "helmfile_release_set" "myapps" {
  content = file("./helmfile.yaml")
  environment = "default"
  kubeconfig = eksctl_cluster.blue.kubeconfig_path
  depends_on = [
    eksctl_cluster.blue
  ]
}

resource "eksctl_courier_alb" "my_alb_courier" {
  listener_arn = aws_alb_listener.mysvc.arn

  priority = "11"

  step_weight = 5
  step_interval = "1m"

  destination {
    target_group_arn = aws_lb_target_group.blue.arn

    weight = 100
  }

  destination {
    target_group_arn = aws_lb_target_group.green.arn
    weight = 0
  }

  depends_on = [
    helmfile_release_set.myapps
  ]
}

Wanna make a critical change to blue, without fearing downtime?

Rethink and update green instead, while changing courier_alb's weight so that the traffic is forwarded to green only after the cluster is successfully updated:

resource "helmfile_release_set" "myapps" {
  content = file("./helmfile.yaml")
  environment = "default"
  # It was `eksctl_cluster.blue.kubeconfig_path` before
  kubeconfig = eksctl_cluster.green.kubeconfig_path
  depends_on = [
    # This was eksctl_cluster.blue before the update
    eksctl_cluster.green
  ]
}

resource "eksctl_courier_alb" "my_alb_courier" {
  listener_arn = aws_alb_listener.mysvc.arn

  priority = "11"

  step_weight = 5
  step_interval = "1m"

  destination {
    target_group_arn = aws_lb_target_group.blue.arn
    # This was 100 before the update
    weight = 0
  }

  destination {
    target_group_arn = aws_lb_target_group.green.arn
    # This was 0 before the update
    weight = 100
  }

  depends_on = [
    helmfile_release_set.myapps
  ]
}

This instructs Terraform to:

  • Update eksctl_cluster.green
  • Run helmfile against the green cluster to have all the Helm charts deployed
  • Gradually shift the traffic from the previous blue cluster to the updated green cluster.

In addition, you can add cloudwatch_metrics and/or datadog_metrics to courier_alb's destinations, so that the provider runs canary analysis to determine whether it should continue shifting the traffic.
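
For instance, here is a sketch of the blue/green courier_alb above extended with a Datadog-based canary analysis, reusing the metric block shape from the earlier courier_alb example (the query and threshold are hypothetical placeholders):

resource "eksctl_courier_alb" "my_alb_courier" {
  listener_arn = aws_alb_listener.mysvc.arn

  priority = "11"

  step_weight   = 5
  step_interval = "1m"

  destination {
    target_group_arn = aws_lb_target_group.blue.arn
    weight           = 0
  }

  destination {
    target_group_arn = aws_lb_target_group.green.arn
    weight           = 100
  }

  # Queried on the configured interval; if the result exceeds `max`, the
  # provider stops shifting further traffic (the canary analysis described above).
  datadog_metric {
    name     = "http_errors_dd"
    interval = "1m"
    max      = 50
    query    = "<QUERY>" # hypothetical Datadog query
  }

  depends_on = [
    helmfile_release_set.myapps
  ]
}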

Cluster canary deployment using Route 53 and NLB

The courier_route53_record resource is used to declaratively and gradually shift traffic behind a Route 53 record backed by ELBs. It uses Route 53's "Weighted routing" behind the scenes.

In combination with standard aws_lb resources and two eksctl_cluster resources, you can conduct a "canary deployment" of the cluster.

This resource may be extracted out of this provider in the future.

First of all, you need two sets of a Route 53 record and an LB (NLB, ALB, or CLB), named blue and green:

resource "aws_route53_record" "blue" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = "www.example.com"
  type    = "A"

  weighted_routing_policy {
    weight = 1
  }

  set_identifier = "blue"

  alias {
    name                   = aws_lb.blue.dns_name
    zone_id                = aws_lb.blue.zone_id
    evaluate_target_health = true
  }

  lifecycle {
    ignore_changes = [
      weighted_routing_policy,
    ]
  }
}

resource "aws_route53_record" "green" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = "www.example.com"
  type    = "A"

  weighted_routing_policy {
    weight = 0
  }

  set_identifier = "green"

  alias {
    name                   = aws_lb.green.dns_name
    zone_id                = aws_lb.green.zone_id
    evaluate_target_health = true
  }

  lifecycle {
    ignore_changes = [
      weighted_routing_policy,
    ]
  }
}

Let's start by forwarding 100% traffic to blue by creating a courier_route53_record that looks like the below:

resource "eksctl_courier_route53_record" "www" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = "www.example.com"

  step_weight = 5
  step_interval = "1m"

  destination {
    set_identifier = "blue"

    weight = 100
  }

  destination {
    set_identifier = "green"

    weight = 0
  }

  depends_on = [
    helmfile_release_set.myapps
  ]
}

Wanna make a critical change to blue, without fearing downtime?

Rethink and update green instead, while changing courier_route53_record's weight so that the traffic is forwarded to green only after the cluster is successfully updated:

resource "helmfile_release_set" "myapps" {
  content = file("./helmfile.yaml")
  environment = "default"
  # It was `eksctl_cluster.blue.kubeconfig_path` before
  kubeconfig = eksctl_cluster.green.kubeconfig_path
  depends_on = [
    # This was eksctl_cluster.blue before the update
    eksctl_cluster.green
  ]
}

resource "eksctl_courier_route53_record" "www" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = "www.example.com"

  step_weight = 5
  step_interval = "1m"

  destination {
    set_identifier = "blue"
    # This was 100 before the update
    weight = 0
  }

  destination {
    set_identifier = "green"
    # This was 0 before the update
    weight = 100
  }

  depends_on = [
    helmfile_release_set.myapps
  ]
}

Advanced Features

Declarative binary version management

terraform-provider-eksctl has a built-in package manager called shoal. With that, you can specify the following eksctl_cluster attributes to let the provider install the executable binaries on demand:

  • eksctl_version for installing eksctl

eksctl_version uses the Go runtime and go-git, so it should work without any external dependencies.

With the below example, the provider installs eksctl v0.27.0, so that you don't need to install it beforehand. This should be handy when you're trying to use this provider on Terraform Cloud, whose runtime environment cannot be customized by the user.

resource "eksctl_cluster" "mystack" {
  eksctl_version = "0.27.0"

  // snip
}

Add and remove Node Groups

In addition to declaring nodegroups in eksctl_cluster's spec, you can add one or more nodegroups by using the eksctl_nodegroup resource:

resource "eksctl_cluster" "red" {
  name = "red1"
  region = "us-east-2"
  api_version = "eksctl.io/v1alpha5"
  version = "1.16"
  vpc_id = module.vpc.vpc_id
  spec = <<-EOS
  nodeGroups:
  - name: ng1
    instanceType: m5.large
    desiredCapacity: 1
    targetGroupARNs:
    - ${aws_lb_target_group.green.arn}
  EOS
}

resource "eksctl_nodegroup" "ng2" {
  assume_role {
    role_arn = var.role_arn
  }
  name = "ng1"
  region = eksctl_cluster.red.region
  cluster = eksctl_cluster.red.name
  nodes_min = 1
  nodes = 1
  # All the `eksctl create nodegroup` flags are available in their `snake_case` form.
  # See `eksctl create nodegroup -h` and
  # https://github.com/mumoshu/terraform-provider-eksctl/pull/34/files#diff-d490f9a73df8d38ad25b7d26bf1152d178c08df0980f55b3c86fc6991b2b9839R165-R202
  # for the full list.
  # For example, `--install-nvidia-plugin` can be specified as `install_nvidia_driver = true`.
}

Which one to use is mostly a matter of preference, but generally eksctl_nodegroup is faster to apply as it involves fewer AWS API calls.

AssumeRole and Cross Account

By providing the assume_role block, you can let the provider call sts:AssumeRole to assume an AWS role, in the same or another account, before calling the AWS API and running eksctl or kubectl.

resource "eksctl_cluster" "red" {
  assume_role {
    role_arn = "arn:aws:iam::${var.account_id}:role/${var.role_name}"
  }
  // snip
}

The Goal

My goal for this project is to allow automated canary deployment of a whole K8s cluster via a single terraform apply run.

That would require a few additional features to this provider, including:

  • Ability to attach eks_cluster to ALB
  • Analyze ALB metrics (like 2xx and 5xx counts per target group) so that we can halt terraform apply before it rolls out a broken cluster
  • Analyze important pods readiness before rolling out a cluster
    • Implemented. Use pods_readiness_check blocks.
  • Analyze Datadog metrics (like request success/error rate, background job success/error rate, etc.) before rolling out a new cluster.
  • Specify default K8s resource manifests to be applied on the cluster
    • The new kubernetes provider doesn't help here. What we need is the ability to apply manifests after cluster creation but before the update of the eksctl_cluster resource completes. With the kubernetes provider, the manifests are applied AFTER the eksctl_cluster update is done, which isn't what we want.
    • Implemented. Use the manifests attribute.
  • Ability to attach eks_cluster to NLB

terraform-provider-eksctl is my alternative to the imaginary eksctl-controller.

I have long considered developing a K8s controller that allows you to manage eksctl cluster updates fully declaratively via a K8s CRD. The biggest pain point of that model is that you still need a multi-cluster control plane, i.e. a "management" K8s cluster, which adds operational/maintenance cost for us.

If I implement the required functionality in a Terraform provider, we don't need an additional K8s cluster for management, as the state is already stored in the Terraform state and the automation is already handled by Atlantis, Terraform Enterprise, or any CI system like CircleCI, GitHub Actions, etc.

As of today, the API is mostly there, but the implementation of the functionality is still TODO.

Developing

If you wish to build this yourself, follow these instructions:

$ cd terraform-provider-eksctl
$ go build

There's also a convenient Make target for installing the provider into the global tf providers directory:

$ make install

The above will install the provider's binary under ${HOME}/.terraform.d/plugins/${OS}_${ARCH}.

If you're using Terraform v0.13+, you need to tweak your .tf file to give a dummy version number to the provider while placing the binary at the corresponding location.

Let's say you use 0.0.1 as the dummy version number:

terraform {
  required_providers {
    eksctl = {
      source = "mumoshu/eksctl"
      version = "0.0.1"
    }

    helmfile = {
      source = "mumoshu/helmfile"
      version = "0.0.1"
    }
  }
}

You place the binary under:

VER=0.0.1

$(PWD)/.terraform/plugins/registry.terraform.io/mumoshu/eksctl/$(VER)/darwin_amd64/terraform-provider-eksctl_v$(VER)

Acknowledgement

The implementation of this project is highly inspired by terraform-provider-shell. A lot of thanks to the author!
