marathon-lb

Marathon-lb is a service discovery & load balancing tool for DC/OS.

Marathon-lb is a tool for managing HAProxy, by consuming Marathon's app state. HAProxy is a fast, efficient, battle-tested, highly available load balancer with many advanced features which power a number of high-profile websites.

Features

  • Stateless design: no direct dependency on any third-party state store like ZooKeeper or etcd (except through Marathon)
  • Idempotent and deterministic: scales horizontally
  • Highly scalable: can achieve line-rate per instance, with multiple instances providing fault-tolerance and greater throughput
  • Real-time LB updates, via Marathon's event bus
  • Support for Marathon's health checks
  • Multi-cert TLS/SSL support
  • Zero-downtime deployments
  • Per-service HAProxy templates
  • DC/OS integration
  • Automated Docker image builds (mesosphere/marathon-lb)
  • Global HAProxy templates which can be supplied at launch
  • Supports IP-per-task integration, such as Project Calico
  • Includes the tini zombie reaper

Getting Started

Take a look at the marathon-lb wiki for example usage, templates, and more.

Architecture

The marathon-lb script marathon_lb.py connects to the Marathon API to retrieve all running apps, generates an HAProxy config, and reloads HAProxy. By default, marathon-lb binds to the service port of every application and sends incoming requests to the application instances.

Services are exposed on their service port (see Service Discovery & Load Balancing for reference) as defined in their Marathon definition. Furthermore, apps are only exposed on LBs which have the same LB tag (or group) as defined in the Marathon app's labels (using HAPROXY_GROUP). HAProxy parameters can be tuned by specifying labels in your app.

To create a virtual host or hosts the HAPROXY_{n}_VHOST label needs to be set on the given application. Applications with a vhost set will be exposed on ports 80 and 443, in addition to their service port. Multiple virtual hosts may be specified in HAPROXY_{n}_VHOST using a comma as a delimiter between hostnames.
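
For example, a hypothetical app definition exposing the same service on two hostnames might carry labels like the following (the app ID and hostnames are placeholders):

{
  "id": "my-web-service",
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "www.example.com,api.example.com"
  }
}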

All applications are also exposed on port 9091, using the X-Marathon-App-Id HTTP header. See the documentation for HAPROXY_HTTP_FRONTEND_APPID_HEAD in the templates section.

You can access the HAProxy statistics via :9090/haproxy?stats, and you can retrieve the current HAProxy config from the :9090/_haproxy_getconfig endpoint.
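
For instance, assuming a marathon-lb instance is reachable at marathon-lb.marathon.mesos (the hostname used elsewhere in this document), these endpoints could be checked with curl:

$ curl "marathon-lb.marathon.mesos:9090/haproxy?stats"
$ curl marathon-lb.marathon.mesos:9090/_haproxy_getconfig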

Deployment

The package is currently available from the universe. To deploy marathon-lb on the public slaves in your DC/OS cluster, simply run:

dcos package install marathon-lb

To configure a custom SSL certificate, set the DC/OS CLI option ssl-cert to your concatenated certificate and private key in .pem format. For more details see the HAProxy documentation.

For further customization, templates can be added by pointing the DC/OS CLI option template-url to a tarball containing a templates/ directory. See the comments in the script for how to name the template files.
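
As a sketch, assuming these options are set under the marathon-lb key (as in the options.json examples later in this document), an options file with placeholder certificate contents and template URL might look like:

{
  "marathon-lb": {
    "ssl-cert": "-----BEGIN CERTIFICATE-----\n<certificate>\n-----END CERTIFICATE-----\n-----BEGIN RSA PRIVATE KEY-----\n<private key>\n-----END RSA PRIVATE KEY-----",
    "template-url": "https://example.com/marathon-lb-templates.tgz"
  }
}

It could then be applied at install time with:

$ dcos package install --options=options.json marathon-lb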

Docker

Synopsis: docker run -e PORTS=$portnumber --net=host mesosphere/marathon-lb sse|poll ...

You must set the PORTS environment variable so that HAProxy can bind to this port. Syntax: docker run -e PORTS=9090 mesosphere/marathon-lb sse [other args]

You can pass in your own certificates for the SSL frontend by setting the HAPROXY_SSL_CERT environment variable. If you need more than one certificate, you can specify additional ones by setting HAPROXY_SSL_CERT0 through HAPROXY_SSL_CERT100.
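
For example, a hypothetical invocation that passes two certificates from local files (the paths are placeholders) might look like:

$ docker run -e PORTS=9090 \
    -e HAPROXY_SSL_CERT="$(cat /path/to/primary.pem)" \
    -e HAPROXY_SSL_CERT0="$(cat /path/to/secondary.pem)" \
    --net=host mesosphere/marathon-lb sse --marathon http://master.mesos:8080 --group external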

sse mode

In SSE mode, the script connects to the Marathon events endpoint to get notified about state changes. This only works with Marathon 0.11.0 or newer.

Syntax: docker run mesosphere/marathon-lb sse [other args]

poll mode

If you can't use the HTTP callbacks, the script can periodically poll the Marathon API to get the scheduler's state.

Syntax: docker run mesosphere/marathon-lb poll [other args]

To change the poll interval (defaults to 60s), you can set the POLL_INTERVAL environment variable.
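
For example, to poll every 30 seconds (the interval here is arbitrary):

$ docker run -e PORTS=9090 -e POLL_INTERVAL=30 --net=host mesosphere/marathon-lb poll --marathon http://master.mesos:8080 --group external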

Direct Invocation

You can also run the update script directly. To generate an HAProxy configuration from Marathon running at localhost:8080 with the marathon_lb.py script, run:

$ ./marathon_lb.py --marathon http://localhost:8080 --group external --strict-mode --health-check

It is possible to pass the --auth-credentials= option if your Marathon instance requires authentication:

$ ./marathon_lb.py --marathon http://localhost:8080 --auth-credentials=admin:password

It is possible to get the auth credentials (user & password) from Vault if you define the following environment variables before running marathon-lb: VAULT_TOKEN, VAULT_HOST, VAULT_PORT, and VAULT_PATH, where VAULT_PATH is the root path under which your user and password are stored.
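
A sketch of such an invocation, with placeholder values for the Vault connection details and secret path:

$ export VAULT_TOKEN=<your-vault-token>
$ export VAULT_HOST=vault.example.com
$ export VAULT_PORT=8200
$ export VAULT_PATH=secret/marathon-lb/credentials
$ ./marathon_lb.py --marathon http://localhost:8080 --group external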

This will refresh haproxy.cfg, and if there are any changes, it will automatically reload HAProxy. Only apps with the label HAPROXY_GROUP=external will be exposed on this LB.

marathon_lb.py has a lot of additional functionality like sticky sessions, HTTP to HTTPS redirection, SSL offloading, virtual host support and templating capabilities.

To get the full documentation run:

$ ./marathon_lb.py --help

Providing SSL Certificates

You can provide your SSL certificate paths to be placed in the frontend marathon_https_in section with --ssl-certs.

$ ./marathon_lb.py --marathon http://localhost:8080 --group external --ssl-certs /etc/ssl/site1.co,/etc/ssl/site2.co --health-check --strict-mode

If you are using the script directly, you have two options:

  • Provide nothing, and the config will use /etc/ssl/cert.pem as the certificate path. Put the certificate at this path, or edit the file to use the correct path.
  • Provide the --ssl-certs command-line argument, and the config will use these paths.

If you are using the provided run script or Docker image, you have three options:

  • Provide your certificate text in the HAPROXY_SSL_CERT environment variable. Its contents will be written to /etc/ssl/cert.pem, and the config will use this path unless you specify extra certificate paths as in the next option.
  • Provide SSL certificate paths with the --ssl-certs command-line argument. Your config will use these certificate paths.
  • Provide nothing, and a self-signed certificate will be created at /etc/ssl/cert.pem and used by the config.

Skipping Configuration Validation

You can skip the configuration file validation step (which calls the HAProxy service) if you don't have HAProxy installed. This is especially useful if you are running HAProxy in Docker containers.

$ ./marathon_lb.py --marathon http://localhost:8080 --group external --skip-validation

Using HAProxy Maps for Backend Lookup

You can use HAProxy maps to speed up the lookup from web application (vhost) to backend. This is very useful for large installations, where the traditional vhost-to-backend rule comparison takes considerable time because it compares each rule sequentially. An HAProxy map creates a hash-based lookup table, so it is fast compared to the sequential approach. This is supported in marathon-lb via the --haproxy-map flag.

$ ./marathon_lb.py --marathon http://localhost:8080 --group external --haproxy-map

Currently it creates a lookup dictionary only for the Host header (both HTTP and HTTPS) and the X-Marathon-App-Id header. For path-based routing and auth, it still uses the usual backend rule comparison.

API Endpoints

Marathon-lb exposes a few endpoints on port 9090 (by default). They are:

Endpoint Description
:9090/haproxy?stats HAProxy stats endpoint. This produces an HTML page which can be viewed in your browser, providing various statistics about the current HAProxy instance.
:9090/haproxy?stats;csv This is a CSV version of the stats above, which can be consumed by other tools. For example, it's used in the zdd.py script.
:9090/_haproxy_health_check HAProxy health check endpoint. Returns 200 OK if HAProxy is healthy.
:9090/_haproxy_getconfig Returns the HAProxy config file as it was when HAProxy was started. Implemented in getconfig.lua.
:9090/_haproxy_getvhostmap Returns the HAProxy vhost to backend map. This endpoint returns the HAProxy map file only when the --haproxy-map flag is enabled; otherwise it returns an empty string. Implemented in getmaps.lua.
:9090/_haproxy_getappmap Returns the HAProxy app ID to backend map. Like _haproxy_getvhostmap, this requires the --haproxy-map flag to be enabled and returns an empty string otherwise. Also implemented in getmaps.lua.
:9090/_haproxy_getpids Returns the PIDs for all HAProxy instances within the current process namespace. This literally returns $(pidof haproxy). Implemented in getpids.lua. This is also used by the zdd.py script to determine if connections have finished draining during a deploy.
:9090/_mlb_signal/hup* Sends a SIGHUP signal to the marathon-lb process, causing it to fetch the running apps from Marathon and reload the HAProxy config as though an event was received from Marathon.
:9090/_mlb_signal/usr1* Sends a SIGUSR1 signal to the marathon-lb process, causing it to restart HAProxy with the existing config, without checking Marathon for changes.
:9090/metrics Exposes HAProxy metrics in Prometheus format.

* These endpoints won't function when marathon-lb is in poll mode as there is no marathon-lb process to be signaled in this mode (marathon-lb exits after each poll).
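
For example, assuming an instance running in sse mode and reachable at marathon-lb.marathon.mesos, the health-check and signal endpoints can be exercised with curl:

$ curl marathon-lb.marathon.mesos:9090/_haproxy_health_check
$ curl marathon-lb.marathon.mesos:9090/_mlb_signal/hup
$ curl marathon-lb.marathon.mesos:9090/_mlb_signal/usr1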

HAProxy Configuration

App Labels

App labels are specified in the Marathon app definition. These can be used to override HAProxy behaviour. For example, to specify the external group for an app with a virtual host named service.mesosphere.com:

{
  "id": "http-service",
  "labels": {
    "HAPROXY_GROUP":"external",
    "HAPROXY_0_VHOST":"service.mesosphere.com"
  }
}

Some labels are specified per service port. These are denoted with the {n} parameter in the label key, where {n} corresponds to the service port index, beginning at 0.
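
For instance, a hypothetical app with two service ports could set a separate virtual host for each port:

{
  "id": "multi-port-service",
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "api.example.com",
    "HAPROXY_1_VHOST": "admin.example.com"
  }
}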

See the configuration doc for the full list of labels.

Templates

Marathon-lb global templates (as listed in the Longhelp) can be overwritten in two ways:

  • By creating an environment variable in the marathon-lb container
  • By placing configuration files in the templates/ directory (relative to where the script is run from)

For example, to replace HAPROXY_HTTPS_FRONTEND_HEAD with this content:

frontend new_frontend_label
  bind *:443 ssl crt /etc/ssl/cert.pem
  mode http

Then this environment variable could be added to the Marathon-LB configuration:

"HAPROXY_HTTPS_FRONTEND_HEAD": "\\nfrontend new_frontend_label\\n  bind *:443 ssl {sslCerts}\\n  mode http"

Alternatively, a file called HAPROXY_HTTPS_FRONTEND_HEAD could be placed in the templates/ directory through the use of an artifact URI.
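
As a sketch of the file-based approach, assuming the script is run from a directory containing templates/, the same override could be provided as a plain file:

$ mkdir -p templates
$ cat > templates/HAPROXY_HTTPS_FRONTEND_HEAD <<'EOF'
frontend new_frontend_label
  bind *:443 ssl {sslCerts}
  mode http
EOF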

Additionally, some templates can also be overridden per app service port. You may add your own templates to the Docker image, or provide them at startup.

See the configuration doc for the full list of templates.

Overridable Templates

Some templates may be overridden using app labels, as per the labels section. Strings are interpreted as literal HAProxy configuration parameters, with substitutions respected (as per the templates section). The HAProxy configuration will be validated for correctness before reloading HAProxy after changes. Note: since the HAProxy config is checked before reloading, if an app's HAProxy labels aren't syntactically correct, HAProxy will not be reloaded, which may result in a stale config.

Here is an example for a service called http-service which requires that http-keep-alive be disabled:

{
  "id": "http-service",
  "labels":{
    "HAPROXY_GROUP":"external",
    "HAPROXY_0_BACKEND_HTTP_OPTIONS":"  option forwardfor\n  no option http-keep-alive\n  http-request set-header X-Forwarded-Port %[dst_port]\n  http-request add-header X-Forwarded-Proto https if { ssl_fc }\n"
  }
}

The full list of per service port templates which can be specified are documented here.

HAProxy Global Default Options

As a shortcut for adding HAProxy global default options (without overriding the global template), a comma-separated list of options may be specified via the HAPROXY_GLOBAL_DEFAULT_OPTIONS environment variable. The default value when not specified is redispatch,http-server-close,dontlognull; as an example, to add the httplog option (and keep the existing defaults), specify HAPROXY_GLOBAL_DEFAULT_OPTIONS=redispatch,http-server-close,dontlognull,httplog.

  • Note that this setting has no effect when the HAPROXY_HEAD template has been overridden.

Operational Best Practices

  • Use service ports within the reserved range (which is 10000 to 10100 by default). This will prevent port conflicts, and ensure reloads don't result in connection errors.
  • Avoid using the HAPROXY_{n}_PORT label; prefer defining service ports.
  • Consider running multiple marathon-lb instances. In practice, 3 or more should be used to provide high availability for production workloads. Running 1 instance is never recommended, and unless you have significant load running more than 5 instances may not add value. The number of MLB instances you run will vary depending on workload and the amount of failure tolerance required. Note: do not run marathon-lb on every node in your cluster. This is considered an anti-pattern due to the implications of hammering the Marathon API and excess health checking.
  • Consider using a dedicated load balancer in front of marathon-lb to permit upgrades/changes. Common choices include an ELB (on AWS) or a hardware load balancer for on-premise installations.
  • Use separate marathon-lb groups (specified with --group) for internal and external load balancing. On DC/OS, the default group is external. A simple options.json for an internal load balancer (see the install sketch after this list) would be:
  {
    "marathon-lb": {
      "name": "marathon-lb-internal",
      "haproxy-group": "internal",
      "bind-http-https": false,
      "role": ""
    }
  }
  • For HTTP services, consider setting VHost (and optionally a path) to access the service on ports 80 and 443. Alternatively, the service can be accessed on port 9091 using the X-Marathon-App-Id header. For example, to access an app with the ID tweeter:
$ curl -vH "X-Marathon-App-Id: /tweeter" marathon-lb.marathon.mesos:9091/
*   Trying 10.0.4.74...
* Connected to marathon-lb.marathon.mesos (10.0.4.74) port 9091 (#0)
> GET / HTTP/1.1
> Host: marathon-lb.marathon.mesos:9091
> User-Agent: curl/7.48.0
> Accept: */*
> X-Marathon-App-Id: /tweeter
>
< HTTP/1.1 200 OK
  • Some of the features of marathon-lb assume that it is the only instance of itself running in a PID namespace. i.e. marathon-lb assumes that it is running in a container. Certain features like the /_mlb_signal endpoints and the /_haproxy_getpids endpoint (and by extension, zero-downtime deployments) may behave unexpectedly if more than one instance of marathon-lb is running in the same PID namespace or if there are other HAProxy processes in the same PID namespace.
  • Sometimes it is desirable to get detailed container and HAProxy logging for easier debugging, as well as to view connection logs for frontends and backends. This can be achieved by setting the HAPROXY_SYSLOGD environment variable or the container-syslogd value in options.json like so:
  {
    "marathon-lb": {
      "container-syslogd": true
    }
  }
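
For reference, a sketch of installing an additional internal load balancer with an options file like the internal-LB example above (the file name marathon-lb-internal.json is arbitrary):

$ dcos package install --options=marathon-lb-internal.json marathon-lb

The container-syslogd setting shown above can be added to the same options file.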

Zero-downtime Deployments

  • Please note that zdd.py is not to be used in a production environment and is purely developed for demonstration purposes.

Marathon-lb is able to perform canary-style blue/green deployments with zero downtime. To execute such deployments, you must follow certain patterns when using Marathon.

The deployment method is described in this Marathon document. Marathon-lb provides an implementation of the aforementioned deployment method with the script zdd.py. To perform a zero downtime deploy using zdd.py, you must:

  • Specify the HAPROXY_DEPLOYMENT_GROUP and HAPROXY_DEPLOYMENT_ALT_PORT labels in your app template
    • HAPROXY_DEPLOYMENT_GROUP: This label uniquely identifies a pair of apps belonging to a blue/green deployment, and will be used as the app name in the HAProxy configuration
    • HAPROXY_DEPLOYMENT_ALT_PORT: An alternate service port is required because Marathon requires service ports to be unique across all apps
  • Only use 1 service port: multiple ports are not yet implemented
  • Use the provided zdd.py script to orchestrate the deploy: the script will make API calls to Marathon, and use the HAProxy stats endpoint to gracefully terminate instances
  • The marathon-lb container must be run in privileged mode (to execute iptables commands) due to the issues outlined in the excellent blog post by the Yelp engineering team found here
  • If you have long-lived TCP connections using the same HAProxy instances, it may cause the deploy to take longer than necessary. The script will wait up to 5 minutes (by default) for connections to drain from HAProxy between steps, but any long-lived TCP connections will cause old instances of HAProxy to stick around.

An example minimal configuration for a test instance of nginx is included here. You might execute a deployment from a CI tool like Jenkins with:

./zdd.py -j 1-nginx.json -m http://master.mesos:8080 -f -l http://marathon-lb.marathon.mesos:9090 --syslog-socket /dev/null

Zero-downtime deployments are accomplished through the use of a Lua module, which reports the number of HAProxy processes that are currently running via the /_haproxy_getpids endpoint. After a restart, there will be multiple HAProxy PIDs until all remaining connections have gracefully terminated. By waiting for all connections to complete, you may safely and deterministically drain tasks. A caveat, however, is that if you have any long-lived connections on the same LB, HAProxy will continue to run and serve those connections until they complete, thereby breaking this technique.

The ZDD script includes the ability to specify a pre-kill hook, which is executed before draining tasks are terminated. This allows you to run your own automated checks against the old and new app before the deploy continues.

Traffic Splitting Between Blue/Green Apps

zdd.py supports splitting traffic between two versions of the same app (version 'blue' and version 'green') by keeping instances of both versions live at the same time. This is supported with the help of the HAPROXY_DEPLOYMENT_NEW_INSTANCES label.

When you run zdd with the --new-instances flag, it creates only the specified number of instances of the new app and deletes the same number of instances from the old app (instead of the usual approach of creating all new instances and deleting all old ones). This ensures that the number of instances in the new app and the old app together equals HAPROXY_DEPLOYMENT_TARGET_INSTANCES.

Example: consider the same nginx app, with 10 instances running image version v1. We can use zdd to create 2 instances of version v2 and retain 8 instances of v1, so that traffic is split in an 80:20 ratio (old:new).

Creating 2 instances of the new version automatically deletes 2 instances of the existing version. You could do this using the following command:

$ ./zdd.py -j 1-nginx.json -m http://master.mesos:8080 -f -l http://marathon-lb.marathon.mesos:9090 --syslog-socket /dev/null --new-instances 2

This state, in which instances of both the old and new versions of the same app are live at the same time, is called the hybrid state.

When a deployment group is in the hybrid state, it needs to be converted to completely the current version or completely the previous version before any further versions can be deployed. This can be done with the --complete-cur and --complete-prev flags in zdd.

When you run the command below, it converts all instances to the new version so that the traffic split ratio becomes 0:100 (old:new), and it deletes the old app. This is graceful, as it follows the usual zdd procedure of waiting for tasks/instances to drain before deleting them.

$ ./zdd.py -j 1-nginx.json -m http://master.mesos:8080 -f -l http://marathon-lb.marathon.mesos:9090 --syslog-socket /dev/null --complete-cur

Similarly, you can use the --complete-prev flag to convert all instances to the old version (essentially a rollback) so that the traffic split ratio becomes 100:0 (old:new), and the new app is deleted.

Currently only one hop of traffic splitting is supported, so you can specify the number of new instances (directly proportional to the traffic split ratio) only when the app has all instances of the same version (completely blue or completely green). This implies that the --new-instances flag cannot be specified in hybrid mode to change the traffic split ratio (instance ratio), as updating the Marathon label (HAPROXY_DEPLOYMENT_NEW_INSTANCES) currently triggers a new deployment in Marathon, which would not be graceful. For the example mentioned, the traffic split ratio goes 100:0 -> 80:20 -> 0:100, with only one hop in which both versions receive traffic simultaneously.

Mesos with IP-per-task Support

Marathon-lb supports load balancing for applications that use the Mesos IP-per-task feature, whereby each task is assigned a unique, accessible IP address. For these tasks, services are directly accessible via the configured discovery ports, and there is no host port mapping. Note that due to limitations in Marathon (see mesosphere/marathon#3636), configured service ports are not exposed to marathon-lb for IP-per-task apps.

For these apps, if the service ports are missing from the Marathon app data, marathon-lb will automatically assign port values from a configurable range. The range is configured using the --min-serv-port-ip-per-task and --max-serv-port-ip-per-task options. While port assignment is deterministic, it is not guaranteed to be stable if you change the current set of deployed apps; in other words, when you deploy a new app, the port assignments may change.
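
For example, to confine automatically assigned service ports to a hypothetical range of 10050-10100:

$ ./marathon_lb.py --marathon http://localhost:8080 --group external \
    --min-serv-port-ip-per-task 10050 --max-serv-port-ip-per-task 10100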

Zombie reaping

When running with isolated containers, you may need to take care of reaping orphaned child processes. HAProxy typically produces orphan processes because of its two-step reload mechanism. Marathon-LB uses tini for this purpose. When running in a container without PID namespace isolation, setting the TINI_SUBREAPER environment variable is recommended.
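
For example, when launching the Docker image without PID namespace isolation, the variable can simply be defined on the container (the value shown here is arbitrary; what matters is that the variable is set):

$ docker run -e PORTS=9090 -e TINI_SUBREAPER=true --net=host mesosphere/marathon-lb sse --marathon http://master.mesos:8080 --group external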

Contributing

PRs are welcome, but here are a few general guidelines:

  • Avoid making changes which may break existing behaviour

  • Document new features

  • Update/include tests for new functionality. To install dependencies and run tests:

    pip install -r requirements-dev.txt
    nosetests
    
  • Use the pre-commit hook to automatically generate docs:

    bash /path/to/marathon-lb/scripts/install-git-hooks.sh
    

Using the Makefile and Docker for development and testing

Running unit and integration tests is automated as make targets. Docker is required to use the targets as it will run all tests in containers.

Several environment variables can be set to control the image tags, DCOS version/variant, etc. Check the top of the Makefile for more info.

To run the unit tests:

make test-unit

To run the integration tests, a DCOS installation will be started via dcos-e2e. The installation of dcos-e2e and management of the cluster are all done in Docker containers. Since the installers are rather large downloads, it is beneficial to specify a value for DCOS_E2E_INSTALLERS_DIR. By default, DCOS_E2E_INSTALLERS_DIR is inside the .cache directory, which is removed upon make clean. You must provide a repository for the resulting Docker image to be pushed to via the CONTAINTER_REPO environment variable. It is assumed that the local Docker is already logged in; the image will be pushed prior to launching the cluster.

To run the integration tests on the OSS variant of DCOS:

DCOS_E2E_INSTALLERS_DIR="${HOME}/dcos/installers" \
CONTAINTER_REPO="my_docker_user/my-marathon-lb-repo" make test-integration

To run the integration tests on the ENTERPRISE variant of DCOS:

DCOS_LICENSE_KEY_PATH=${HOME}/license.txt \
DCOS_E2E_VARIANT=enterprise \
DCOS_E2E_INSTALLERS_DIR="${HOME}/dcos/installers" \
CONTAINTER_REPO="my_docker_user/my-marathon-lb-repo" make test-integration

To run both unit and integration tests (add appropriate variables):

CONTAINTER_REPO="my_docker_user/my-marathon-lb-repo" make test

Troubleshooting your development environment setup

FileNotFoundError: [Errno 2] No such file or directory: 'curl-config'

You need to install the curl development package.

# Fedora
dnf install libcurl-devel

# Ubuntu
apt-get install libcurl-dev

ImportError: pycurl: libcurl link-time ssl backend (nss) is different from compile-time ssl backend (openssl)

The pycurl package linked against the wrong SSL backend when you installed it.

pip uninstall pycurl
export PYCURL_SSL_LIBRARY=nss
pip install -r requirements-dev.txt

Swap nss for whatever backend it mentions.

Release Process

  1. Create a GitHub release. Follow the convention of past releases. You can find something to copy/paste if you hit the "edit" button of a previous release.

  2. The GitHub release creates a tag, and Docker Hub will build off of that tag.

  3. Make a PR to Universe. The suggested way is to create one commit that only copies the previous dir to a new one, and then a second commit that makes the actual changes. If unsure, check out the previous commits to the marathon-lb directory in Universe.
