Kubernetes Ingress Controller for AWS

This is an ingress controller for Kubernetes (the open-source container deployment, scaling, and management system) on AWS. It runs inside a Kubernetes cluster to monitor changes to your ingress resources and orchestrate AWS Load Balancers accordingly.

This ingress controller uses the EC2 instance metadata of the worker node where it's currently running to find additional details about the cluster provisioned by Kubernetes on top of AWS. This information is used to manage AWS resources for each ingress object of the cluster.

Features

  • Uses CloudFormation to guarantee consistent state
  • Automatic discovery of SSL certificates
  • Automatic forwarding of requests to all Worker Nodes, even with auto scaling
  • Automatic cleanup of unnecessary managed resources
  • Support for both Application Load Balancers and Network Load Balancers.
  • Support for internet-facing and internal load balancers
  • Support for ignoring cluster-internal ingresses that only have --cluster-local-domain=cluster.local domains.
  • Support for denying traffic for internal domains.
  • Support for multiple Auto Scaling Groups
  • Support for instances that are not part of Auto Scaling Group
  • Support for SSLPolicy, set default and per ingress
  • Support for CloudWatch Alarm configuration
  • Can be used in clusters created by Kops, see our deployment guide for Kops
  • Support for Multiple TLS Certificates per ALB (SNI).
  • Support for AWS WAF and WAFv2
  • Support for AWS CNI pod direct access
  • Support for Kubernetes CRD RouteGroup

Upgrade

<v0.14.0 to >=v0.14.0

Version v0.14.0 makes the --target-access-mode flag required to make upgrading users aware of the issue (see <v0.12.17 to <v0.14.0 below).

New deployments of the controller should use --target-access-mode=HostPort or --target-access-mode=AWSCNI.

To upgrade from <v0.12.17 use --target-access-mode=Legacy - it is the same as HostPort but does not set the target type and relies on CloudFormation to use instance as the default value.

Note that later changing away from --target-access-mode=Legacy will change the target type in CloudFormation and trigger target group recreation and downtime.

To upgrade from >=v0.12.17 when --target-access-mode is not set, use --target-access-mode=HostPort explicitly.
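
For illustration, a minimal sketch (hedged; the image tag and everything except the args line are placeholders) of how the flag might be set in the controller Deployment's container spec:

# Hypothetical excerpt of the controller Deployment pod template
    spec:
      containers:
      - name: kube-ingress-aws-controller
        image: ghcr.io/zalando-incubator/kube-ingress-aws-controller:latest
        args:
        - --target-access-mode=HostPort   # or AWSCNI; Legacy only when upgrading from <v0.12.17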

<v0.13.0 to >=0.13.0

Version v0.13.0 uses Ingress version v1 as the default. You can downgrade the ingress version to earlier versions via a flag. You will also need to allow access via RBAC; see <v0.11.0 to >=0.11.0 below for more information.

<v0.12.17 to <v0.14.0

Please see the release notes and the related issue: this update can cause 30 seconds of downtime if you don't use AWS CNI mode.

Please upgrade to >=v0.14.0.

<v0.12.0 to <=0.12.16

Version v0.12.0 changes Network Load Balancer type handling if Application Load Balancer type feature is requested. See Load Balancers types notes for details.

<v0.11.0 to >=0.11.0

Version v0.11.0 changes the default apiVersion used for fetching/updating ingresses from extensions/v1beta1 to networking.k8s.io/v1beta1. For this to work the controller needs permissions to list ingresses and to update and patch ingresses/status in the networking.k8s.io apiGroup; see the deployment example. To fall back to the old behavior you can set the apiVersion via the --ingress-api-version flag. The value must be extensions/v1beta1, networking.k8s.io/v1beta1 (default) or networking.k8s.io/v1.
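
For illustration, a hedged sketch of the RBAC rule this implies (the ClusterRole name is a placeholder; the linked deployment example is authoritative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-ingress-aws-controller   # placeholder name
rules:
# list ingresses from the networking.k8s.io apiGroup
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["list"]
# update and patch the ingress status
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses/status"]
  verbs: ["update", "patch"]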

<v0.9.0 to >=v0.9.0

Version v0.9.0 changes the internal flag parsing library to kingpin, which means flags are now defined with -- (two dashes) instead of a single dash. You need to change all flags accordingly, e.g. -stack-termination-protection -> --stack-termination-protection, before running v0.9.0 of the controller.

<v0.8.0 to >=v0.8.0

Version v0.8.0 added a certificate verification check to automatically ignore self-signed certificates and certificates from internal CAs. The IAM role used by the controller now needs the acm:GetCertificate permission. The acm:DescribeCertificate permission is no longer needed and can be removed from the role.

<v0.7.0 to >=v0.7.0

Version v0.7.0 deletes the annotation zalando.org/aws-load-balancer-ssl-cert-domain, which we no longer consider a feature since we have SNI-enabled ALBs.

<v0.6.0 to >=v0.6.0

Version v0.6.0 introduced support for Multiple TLS Certificates per ALB (SNI). When upgrading, your ALBs will automatically be aggregated into a single ALB with multiple certificates configured. It also adds support for attaching single EC2 instances and multiple AutoScalingGroups to the ALBs, therefore you must ensure you have the correct instance filter defined before upgrading. The default filter is tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node; see How it works for more information on how to configure this.

<v0.5.0 to >=v0.5.0

Version v0.5.0 introduced support for both internet-facing and internal load balancers. For this change we had to change the naming of the CloudFormation stacks created by the controller. Upgrading from v0.4.* to v0.5.0 requires no changes, but because of the new stack naming, downgrading back to a v0.4.* version will be disruptive, since the older version is unable to manage stacks with the new naming scheme. Deleting the stacks manually will allow for a working downgrade.

<v0.4.0 to >=v0.4.0

In versions before v0.4.0 we used AWS tags that were set automatically by CloudFormation to find some AWS resources. This behavior has been changed to use custom, non-CloudFormation tags.

In order to update to v0.4.0, you have to add the following tags to your AWS load balancer SecurityGroup before updating:

  • kubernetes:application=kube-ingress-aws-controller
  • kubernetes.io/cluster/<cluster-id>=owned

Additionally, you must ensure that the instance where the ingress controller is running has the cluster ID tag kubernetes.io/cluster/<cluster-id>=owned set (it was ClusterID=<cluster-id> before v0.4.0); see the example below.
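
For example, the tags could be added with the AWS CLI (a hedged sketch; the security group and instance IDs are placeholders):

# Tag the load balancer SecurityGroup:
aws ec2 create-tags --resources <security-group-id> \
  --tags Key=kubernetes:application,Value=kube-ingress-aws-controller \
         Key=kubernetes.io/cluster/<cluster-id>,Value=owned

# Tag the instance running the ingress controller with the cluster ID:
aws ec2 create-tags --resources <instance-id> \
  --tags Key=kubernetes.io/cluster/<cluster-id>,Value=owned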

Ingress annotations

Overview of configuration which can be set via Ingress annotations.

Annotations

| Name | Value | Default |
|------|-------|---------|
| alb.ingress.kubernetes.io/ip-address-type | ipv4 \| dualstack | ipv4 |
| zalando.org/aws-load-balancer-ssl-cert | string | N/A |
| zalando.org/aws-load-balancer-scheme | internal \| internet-facing | internet-facing |
| zalando.org/aws-load-balancer-shared | true \| false | true |
| zalando.org/aws-load-balancer-security-group | string | N/A |
| zalando.org/aws-load-balancer-ssl-policy | string | ELBSecurityPolicy-2016-08 |
| zalando.org/aws-load-balancer-type | nlb \| alb | alb |
| zalando.org/aws-load-balancer-http2 | true \| false | true |
| zalando.org/aws-waf-web-acl-id | string | N/A |
| kubernetes.io/ingress.class | string | N/A |

The defaults can also be configured globally via a flag on the controller.

Load Balancers types

The controller supports both Application Load Balancers and Network Load Balancers. Below is an overview of which features can be used with the individual Load Balancer types.

| Feature | Application Load Balancer | Network Load Balancer |
|---------|---------------------------|-----------------------|
| HTTPS | ✔️ | ✔️ |
| HTTP | ✔️ | ✔️ --nlb-http-enabled |
| HTTP -> HTTPS redirect | ✔️ --redirect-http-to-https | ✖️ |
| Cross Zone Load Balancing | ✔️ (only option) | ✔️ --nlb-cross-zone |
| Dualstack support | ✔️ --ip-addr-type=dualstack | ✖️ |
| Idle Timeout | ✔️ --idle-connection-timeout | ✖️ |
| Custom Security Group | ✔️ | ✖️ |
| Web Application Firewall (WAF) | ✔️ | ✖️ |
| HTTP/2 Support | ✔️ | (not relevant) |

To facilitate switching the default load balancer type from Application to Network: when the default load balancer type is Network (--load-balancer-type="network") and the Custom Security Group (zalando.org/aws-load-balancer-security-group) or Web Application Firewall (zalando.org/aws-waf-web-acl-id) annotation is present, the controller configures an Application Load Balancer. If the zalando.org/aws-load-balancer-type: nlb annotation is also present, the controller ignores the configuration and logs an error.

AWS Tags

SecurityGroup auto detection needs the following AWS Tags on the SecurityGroup:

  • kubernetes.io/cluster/<cluster-id>=owned
  • kubernetes:application=<controller-id>, controller-id defaults to kube-ingress-aws-controller and can be set by flag --controller-id=<my-ctrl-id>.

AutoScalingGroup auto detection needs the same AWS tags on the AutoScalingGroup as defined for the SecurityGroup.

In case you want to attach/detach single EC2 instances to the ALB TargetGroup, they must have the same <cluster-id> set as the running kube-ingress-aws-controller, normally via the tag kubernetes.io/cluster/<cluster-id>=owned.

Development Status

This controller has been used in production since Q1 2017. It aims to be out-of-the-box useful for anyone running Kubernetes. Jump down to the Quickstart to try it out, and please let us know if you have trouble getting it running by filing an Issue. If you created your cluster with Kops, see our deployment guide for Kops.

As of this writing, it's being used in production use cases at Zalando, and can be considered battle-tested in this setup. We're actively seeking devs/teams/companies to try it out and share feedback so we can make improvements.

We are also eager to bring new contributors on board. See our contributor guidelines to get started, or claim a "Help Wanted" item.

Why We Created This Ingress Controller

The maintainers of this project are building an infrastructure that runs Kubernetes on top of AWS at large scale (for nearly 200 delivery teams), and with automation. As such, we're creating our own tooling to support this new infrastructure. We couldn't find an existing ingress controller that operates like this one does, so we created one ourselves.

We're using this ingress controller with Skipper, an HTTP router that Zalando has used in production since Q4 2015 as part of its front-end microservices architecture. Skipper is also open source and has some outstanding features that we documented here. Feel free to use it, or use another ingress of your choosing.

How It Works

This controller continuously polls the API server to check for ingress resources. It runs an infinite loop. For each cycle it creates load balancers for new ingress resources, and deletes the load balancers for obsolete/removed ingress resources.

This is achieved using AWS CloudFormation. For more details, check our CloudFormation Documentation.

The controller will not manage the security groups required to allow access from the Internet to the load balancers. It assumes that their lifecycle is external to the controller itself.

During startup phase EC2 filters are constructed as follows:

  • If the CUSTOM_FILTERS environment variable is set, it is used to generate the filters that are later used to fetch instances from EC2.
  • If the CUSTOM_FILTERS environment variable is not set or could not be parsed, the default filters tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node are used, where <cluster-id> is determined from the EC2 tags of the instance on which the ingress controller pod is started.

CUSTOM_FILTERS is a list of filters separated by spaces. Each filter has the form name=value, where name can be a tag: or tag-key prefixed expression, as would be recognized by the EC2 API, and value is the value of the filter, or a comma-separated list of values.

For example:

  • tag-key=test will filter instances that have a tag named test, ignoring the value.
  • tag:foo=bar will filter instances that have a tag named foo with the value bar.
  • tag:abc=def,ghi will filter instances that have a tag named abc with the value def OR ghi.
  • The default filter tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node filters instances that have the tag kubernetes.io/cluster/<cluster-id> with the value owned and have a tag named k8s.io/role/node.
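
As an illustration, a hedged sketch of setting CUSTOM_FILTERS on the controller container (the filter values are placeholders):

env:
- name: CUSTOM_FILTERS
  # space-separated filters; values within a filter may be comma-separated
  value: "tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node"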

Every poll cycle EC2 is queried with the filters that were constructed during startup. Each newly discovered instance is scanned for an Auto Scaling Group tag. Each Target Group created by this ingress controller is then added to each known Auto Scaling Group. Auto Scaling Group information is fetched only once, when its first node is discovered for the first time. If an instance does not belong to an Auto Scaling Group (it does not have the aws:autoscaling:groupName tag), it is stored in a separate list of Single Instances. On each cycle, the instances on this list are registered as targets in all Target Groups managed by this controller. If the call to get instances from EC2 does not return a previously known Single Instance, it is deregistered from the Target Groups and removed from the list of Single Instances. Calls to deregister instances are aggregated so that at most one deregister call is issued per poll cycle.

For Auto Scaling Groups, the controller will always try to build a list of owned Auto Scaling Groups based on the tag: kubernetes.io/cluster/<cluster-id>=owned even if this tag is not specified in the CUSTOM_FILTERS configuration. Tracking the owned Auto Scaling Groups is done to automatically deregister any ASGs which are no longer targeted by the CUSTOM_FILTERS.

Discovery

On startup, the controller discovers the AWS resources required for the controller operations:

  1. The Security Group

    Lookup of the kubernetes.io/cluster/<cluster-id> tag on the Security Group matching the cluster ID of the controller node, and of the kubernetes:application tag matching the value kube-ingress-aws-controller; as a fallback for <v0.4.0, the aws:cloudformation:logical-id tag matching the value IngressLoadBalancerSecurityGroup (only for clusters created by CloudFormation).

  2. The Subnets

    Subnets are discovered based on the VPC of the instance where the controller is running. By default it will try to select all subnets of the VPC, but will limit the subnets to one per Availability Zone. If there are many subnets within the VPC, it's possible to tag the desired subnets with kubernetes.io/role/elb (for internet-facing ALBs) or kubernetes.io/role/internal-elb (for internal ALBs). Subnets with these tags will be favored when selecting subnets for the ALBs. Additionally you can tag EC2 subnets with kubernetes.io/cluster/<cluster-id>, which will be prioritized. If there are two possible subnets for a single Availability Zone, the first subnet, lexicographically sorted by ID, will be selected; see the example below.
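
For illustration, a hedged example of tagging a subnet so it is favored for internet-facing ALBs (the subnet ID is a placeholder and the tag value of 1 is an assumption; the section above only mentions the tag key):

aws ec2 create-tags --resources <subnet-id> \
  --tags Key=kubernetes.io/role/elb,Value=1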

Running outside of EC2

The controller can run outside of EC2. In this mode it can't discover the vpc-id and cluster-id, which need to be passed via flags on startup:

./kube-ingress-aws-controller \
  --cluster-id="<cluster-id>" \
  --vpc-id="<vpc-id>"

You can get the VPC ID by listing VPCs in your AWS account:

aws ec2 describe-vpcs
{
    "Vpcs": [
        {
            "CidrBlock": "172.31.0.0/16",
            "DhcpOptionsId": "....",
            "State": "available",
            "VpcId": "vpc-abcde",
            ...
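
To extract just the VPC ID, you can, for example, use the CLI's --query option:

aws ec2 describe-vpcs --query 'Vpcs[0].VpcId' --output text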

Creating Load Balancers

When the controller learns about new ingress resources, it uses the hostnames specified in them to automatically determine the most specific, valid certificates to use. The certificates have to be valid for at least 7 days. An example ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-app
spec:
  rules:
  - host: test-app.example.org
    http:
      paths:
      - backend:
          service:
            name: test-app-service
            port:
              name: main-port
        path: /
        pathType: ImplementationSpecific

The Application Load Balancer created by the controller will have both an HTTP listener and an HTTPS listener. The latter will use the automatically selected certificates.

By default the ingress controller will aggregate all ingresses under as few Application Load Balancers as possible (unless running with --disable-sni-support). If you'd like to provision an Application Load Balancer that is unique for an ingress, you can use the annotation zalando.org/aws-load-balancer-shared: "false", as shown below.
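
For example, a sketch of an ingress requesting its own, non-shared Application Load Balancer (names follow the placeholder examples used elsewhere in this document):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    zalando.org/aws-load-balancer-shared: "false"
spec:
  rules:
  - host: test-app.example.org
    http:
      paths:
      - backend:
          service:
            name: test-app-service
            port:
              name: main-port
        path: /
        pathType: ImplementationSpecific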

The new Application Load Balancers have a custom tag marking them as managed load balancers to differentiate them from other load balancers. The tag looks like this:

`kubernetes:application` = `kube-ingress-aws-controller`

They also share the kubernetes.io/cluster/<cluster-id> tag with other resources of the cluster they belong to.
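
If you want to list the load balancers managed by a controller, one option is the Resource Groups Tagging API (a hedged sketch; the resource type filter value is an assumption):

aws resourcegroupstaggingapi get-resources \
  --resource-type-filters elasticloadbalancing:loadbalancer \
  --tag-filters Key=kubernetes:application,Values=kube-ingress-aws-controller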

Create a Load Balancer with a pinned certificate

As a second option you can specify the Amazon Resource Name (ARN) of the desired certificate with an annotation like the one shown here:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    zalando.org/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:123456789012:certificate/f4bd7ed6-bf23-11e6-8db1-ef7ba1500c61
spec:
  rules:
  - host: test-app.example.org
    http:
      paths:
      - backend:
          service:
            name: test-app-service
            port:
              name: main-port
        path: /
        pathType: ImplementationSpecific

Create an internal Load Balancer

You can select the Application Load Balancer Scheme with an annotation like the one shown here:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    zalando.org/aws-load-balancer-scheme: internal
spec:
  rules:
  - host: test-app.example.org
    http:
      paths:
      - backend:
          service:
            name: test-app-service
            port:
              name: main-port
        path: /
        pathType: ImplementationSpecific

You can only select from internet-facing (default) and internal options.

If you run the controller with --load-balancer-type=network and create an internal load balancer, the controller will create an Application Load Balancer instead of a Network Load Balancer, because internal Network Load Balancers can cause hard-to-debug issues that we want to prevent by default. If you know what you are doing, you can enforce creating a Network Load Balancer by setting the annotation zalando.org/aws-load-balancer-type: nlb.

Omit creating a Load Balancer for cluster-internal domains

Since >=v0.10.5, you can create Ingress objects with host rules ending in .cluster.local, and the controller will not create an ALB for them.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
spec:
  rules:
  - host: test-app.skipper.cluster.local
    http:
      paths:
      - backend:
          service:
            name: test-app-service
            port:
              name: main-port
        path: /
        pathType: ImplementationSpecific

If you pass --cluster-local-domain=".cluster.local", you can change which domain is considered cluster-internal. If you're using the deny internal traffic feature, you might want to sync this configuration with --internal-domains.

Deny traffic for internal domains

Since >=v0.11.18 the controller supports the flag --deny-internal-domains. It's a boolean option that, when enabled, configures the ALBs' CloudFormation templates with an AWS::ElasticLoadBalancingV2::ListenerRule resource. This rule is configured with the condition values from the --internal-domains flag and a fixed-response action built from the respective --deny-internal-domains-response* flags. This feature is not enabled by default. The following are the default values of its config flags:

  • internal-domains: *.cluster.local
  • deny-internal-domains: false (same as explicitly passing --no-deny-internal-domains)
  • deny-internal-domains-response: Unauthorized
  • deny-internal-domains-response-content-type: text/plain
  • deny-internal-domains-response-status-code: 401

Note that --internal-domains differs from --cluster-local-domain, which is used exclusively to avoid load balancer creation for the cluster-internal domain. The --internal-domains flag can be set multiple times and accepts AWS wildcard characters. Check the AWS docs on the Host Header condition for more details.

This feature is not supported by NLBs.

Example:

Running the controller with --deny-internal-domains and --internal-domains=*.cluster.local will generate a rule in the ALB that matches any request to domains ending in .cluster.local and answers the request with an HTTP 401 Unauthorized.
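
A hedged sketch of such an invocation (other required flags omitted):

./kube-ingress-aws-controller \
  --deny-internal-domains \
  --internal-domains='*.cluster.local'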

Create Load Balancer with SSL Policy

You can select the default SSLPolicy with the flag --ssl-policy=ELBSecurityPolicy-TLS-1-2-2017-01. This choice can be overridden per ingress by the annotation zalando.org/aws-load-balancer-ssl-policy, set to any valid value. Valid values are checked by the controller.

Example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    zalando.org/aws-load-balancer-ssl-policy: ELBSecurityPolicy-FS-2018-06
spec:
  rules:
  - host: test-app.example.org
    http:
      paths:
      - backend:
          service:
            name: test-app-service
            port:
              name: main-port
        path: /
        pathType: ImplementationSpecific

Create Load Balancer with SecurityGroup

The controller will normally detect the SecurityGroup to use automatically. Auto detection is done by filtering all SecurityGroups by AWS tags: the kubernetes.io/cluster/<cluster-id> tag of the Security Group should match the cluster ID of the controller node with the value owned, and the kubernetes:application tag should match the value kube-ingress-aws-controller.

If you want to override the detected SecurityGroup, you can set a SecurityGroup of your choice with the zalando.org/aws-load-balancer-security-group annotation like the one shown here:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    zalando.org/aws-load-balancer-security-group: sg-somegroupeid
spec:
  rules:
  - host: test-app.example.org
    http:
      paths:
      - backend:
          service:
            name: test-app-service
            port:
              name: main-port
        path: /
        pathType: ImplementationSpecific

Create Load Balancers with WAF associations

It is possible to define WAF associations for the created load balancers. The WAF Web ACLs need to be created separately via CloudFormation or the AWS Console, and they can be referenced either as a global startup configuration of the controller, or as ingress specific settings in the ingress object with an annotation. The ingress annotation overrides the global setting, and the controller will create separate load balancers for those ingresses using a separate WAF association.

The controller supports two versions of AWS WAF:

  • WAF (v1 or "classic"): the Web ACL is identified by a UUID
  • WAFv2: the Web ACL is identified by its ARN, prefixed with arn:aws:wafv2:

Only one WAF association can be used for a load balancer, and the same command line flag and ingress annotation are used for both versions; only the format of the value differs.

Starting the controller with a global WAF association:

kube-ingress-aws-controller --aws-waf-web-acl-id=arn:aws:wafv2:eu-central-1:123456789012:regional/webacl/test-waf-acl/12345678-abcd-efgh-ijkl-901234567890

Setting an ingress-specific WAF association:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    zalando.org/aws-waf-web-acl-id: arn:aws:wafv2:eu-central-1:123456789012:regional/webacl/test-waf-acl/12345678-abcd-efgh-ijkl-901234567890
spec:
  rules:
  - host: test-app.example.org
    http:
      paths:
      - backend:
          service:
            name: test-app-service
            port:
              name: main-port
        path: /
        pathType: ImplementationSpecific

Deleting load balancers

When the controller detects that a managed load balancer for the current cluster doesn't have a matching ingress resource anymore, it deletes all the previously created resources.

Deletion may take up to about 30 minutes. This ensures proper draining of connections on the load balancers and allows DNS TTLs to expire.

Building

This project provides a Makefile that you can use to build either a binary or a Docker image.

Building a Binary

To build a binary for the Linux operating system, simply run make or make build.linux.

Building a Docker Image

To create a Docker image instead, execute make build.docker. You can then push your Docker image to the Docker registry of your choice.

Deploy

To deploy the ingress controller, use the example YAML as the descriptor. You can customize the image used in the example YAML file.

We provide ghcr.io/zalando-incubator/kube-ingress-aws-controller:latest as a publicly usable Docker image built from this codebase. You can deploy it in two easy steps:

  • Replace the placeholder for your region inside the example YAML, e.g., eu-west-1
  • Use kubectl to execute the command kubectl apply -f deploy/ingress-controller.yaml

If you use Kops to create your cluster, please use our deployment guide for Kops.

Running multiple instances

In some cases it might be useful to run multiple instances of this controller:

  • Isolating internal vs external traffic
  • Using a different set of traffic processing nodes
  • Using different frontend routers (e.g.: Skipper and Traefik)

You can use the flag --controller-id to set a token that will be used to isolate resources between controller instances. This value will be used to tag those resources.

If you don't pass an ID, the default kube-ingress-aws-controller will be used.

Usually you would want to combine this flag with --ingress-class-filter so different types of ingresses are associated with different controllers. To make kube-ingress-aws-controller manage both a specific ingress class and the empty one (i.e. ingresses without an ingress class annotation), add an empty class to the list. For example, to manage ingress class foo and ingresses without a class, set the parameter like this: --ingress-class-filter=foo, (note the trailing comma).

Ingress classes defined in the spec of ingresses at spec.ingressClassName (Kubernetes Documentation) will take priority over the annotation if both are supplied. In order to match the default (empty) ingress group, both must be empty.
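
For example, two controller instances could be run with separate controller IDs and ingress class filters (a hedged sketch; the class names are placeholders):

# Instance for ingress class "public" plus ingresses without a class (note the trailing comma):
./kube-ingress-aws-controller --controller-id=public --ingress-class-filter=public,

# Instance for ingress class "internal" only:
./kube-ingress-aws-controller --controller-id=internal --ingress-class-filter=internal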

Target and Health Check Ports

By default the port 9999 is used as both health check and target port. This means that Skipper or any other traffic router you're using needs to be listening on that port.

If you want to change the default ports, you can control them using the --target-port and --health-check-port flags.

If you want to use an HTTPS-enabled target port, use the --target-https flag. This only affects ALBs; NLBs ignore this flag.
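
A hedged example of overriding the defaults (the port value is a placeholder):

./kube-ingress-aws-controller \
  --target-port=8080 \
  --health-check-port=8080 \
  --target-https   # only affects ALBs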

HTTP to HTTPS Redirection

By default, the controller will expose both HTTP and HTTPS ports on the load balancer and forward both listeners to the target port. Setting the flag --redirect-http-to-https will instead configure the HTTP listener to emit a 301 redirect for any request received, with the destination location being the same URL but with the HTTPS scheme instead of HTTP. The specifics are described in the relevant AWS documentation.

Backward Compatibility

The controller used to have only the --health-check-port flag available and would use the same port for health checks and as the target port. These ports are now configured individually. If you relied on the old behavior, please include --target-port in your configuration.

AWS CNI Mode (experimental)

The common operation mode of the controller (--target-access-mode=HostPort) is to link the target groups to the autoscaling group. The target group type is instance, requiring the ingress pod to be accessible through a HostNetwork and HostPort.

In AWS CNI Mode (--target-access-mode=AWSCNI) the controller actively manages the target group members. Since pods in AWS EKS clusters running the AWS VPC CNI are first-class members of the VPC, they can receive traffic directly and are managed through a target group of type ip, which means there is no need for the HostPort indirection (see the sketch below the configuration options).

Notes

  • For security reasons the HostPort requirement might be of concern
  • Direct management of the target group members is significantly faster than the ASG-linked mode, but it requires a running controller for updates. As of now, the controller is not prepared for a highly available, replicated setup.
  • Registration and deregistration are synced with the pod lifecycle, hence a pod in the terminating phase is deregistered from the target group before shutdown.
  • Ingress pods are not bound to nodes in CNI mode and the deployment can scale independently.

Configuration options

| Access mode | HostNetwork | HostPort | Notes |
|-------------|-------------|----------|-------|
| HostPort | true | true | target group updated by ASG, see v0.14.0 release notes |
| AWSCNI | true | true | PodIP == HostIP: limited scaling and host bound |
| AWSCNI | false | true | PodIP != HostIP: limited scaling and host bound |
| AWSCNI | false | false | free scaling, pod VPC CNI IP used |
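
As an illustration, a hedged sketch of the last row above (AWSCNI with free scaling): the controller runs with --target-access-mode=AWSCNI, and the ingress pods (e.g. Skipper) need neither hostNetwork nor hostPort, since their VPC CNI pod IPs are registered in the ip target group directly. Names and ports below are placeholders:

# Controller container args (sketch):
args:
- --target-access-mode=AWSCNI

# Ingress pod spec (sketch): no hostNetwork / hostPort required
spec:
  hostNetwork: false
  containers:
  - name: skipper
    ports:
    - containerPort: 9999   # matches the default target and health check port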

Trying it out

The ingress controller's responsibility is limited to managing load balancers, as described above. To have a fully functional setup, in addition to the ingress controller, you can use Skipper to route traffic to the application. The setup follows what's described here.

You can deploy skipper as a DaemonSet using another example YAML by executing the following command:

kubectl apply -f deploy/skipper.yaml

To complete the setup, you'll need to fulfill some additional requirements regarding security groups and IAM roles; more info here.

DNS

To have convenient DNS names for your application, you can use the Kubernetes-Incubator project, external-dns. It's not strictly necessary for this Ingress Controller to work, though.

Contributing

We welcome your contributions, ideas and bug reports via issues and pull requests; here are those Contributor guidelines again.

Contact

Check our MAINTAINERS file for email addresses.

Security

We welcome your security reports; please check out our SECURITY.md.

License

The MIT License (MIT) Copyright © [2017] Zalando SE, https://tech.zalando.com

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
