Nomad podman Driver

A Nomad task driver plugin for sandboxing workloads in Podman containers.

Many thanks to @towe75 and Pascom for contributing this plugin to Nomad!

Features

  • Use the job's driver config to define the image for your container
  • Start/stop containers with default or custom entrypoint and arguments
  • Nomad runtime environment is populated
  • Use Nomad alloc data in the container
  • Bind mount custom volumes into the container
  • Publish ports
  • Monitor the memory consumption
  • Monitor CPU usage
  • Task config cpu value is used to populate podman CpuShares
  • Task config cores value is used to populate podman Cpuset
  • Container log is forwarded to Nomad logger
  • Utilize podman's --init feature
  • Set username or UID used for the specified command within the container (podman --user option)
  • Fine-tune memory usage: standard Nomad memory resource plus additional driver-specific swap, swappiness and reservation parameters, OOM handling
  • Supports rootless containers with cgroup V2
  • Set DNS servers, search list and options via Nomad dns parameters (see the sketch after this list)
  • Support for Nomad shared network namespaces and Consul Connect
  • Flexible network configuration that makes it easy to build pod-like structures within a Nomad group
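
The dns parameters mentioned above live in Nomad's group network block. A minimal sketch (the server address is a placeholder):

group "example" {
  network {
    dns {
      servers = ["10.0.0.53"]
    }
  }
}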

Redis Example job

Here is a simple Redis "hello world" example:

job "redis" {
  datacenters = ["dc1"]
  type        = "service"

  group "redis" {
    network {
      port "redis" { to = 6379 }
    }

    task "redis" {
      driver = "podman"

      config {
        image = "docker://redis"
        ports = ["redis"]
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}
nomad run redis.nomad

==> Monitoring evaluation "9fc25b88"
    Evaluation triggered by job "redis"
    Allocation "60fdc69b" created: node "f6bccd6d", group "redis"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "9fc25b88" finished with status "complete"

podman ps

CONTAINER ID  IMAGE                           COMMAND               CREATED         STATUS             PORTS  NAMES
6d2d700cbce6  docker.io/library/redis:latest  docker-entrypoint...  16 seconds ago  Up 16 seconds ago         redis-60fdc69b-65cb-8ece-8554-df49321b3462

Building The Driver from source

This project has a go.mod definition, so you can clone it to any directory; it is not necessary to set up a GOPATH. Ensure that you use Go 1.17 or newer.

git clone git@github.com:hashicorp/nomad-driver-podman
cd nomad-driver-podman
make dev

The compiled binary will be located at ./build/nomad-driver-podman.

Runtime dependencies

  • Nomad 0.12.9+
  • Linux host with podman installed
  • For rootless containers you need a system supporting cgroup V2 and a few other things, follow this tutorial

You need a 3.0.x podman binary and a systemd socket activation unit; see https://www.redhat.com/sysadmin/podmans-new-rest-api

The Nomad agent, nomad-driver-podman and podman will reside on the same host, so you do not have to worry about the SSH aspects of the Podman API.

Ensure that Nomad can find the plugin; see the plugin_dir agent option.
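
For illustration, a minimal agent configuration could look like this (the plugin directory path is a placeholder):

plugin_dir = "/opt/nomad/plugins"

plugin "nomad-driver-podman" {
  config {
    # driver options, see Driver Configuration below
  }
}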

Driver Configuration

  • volumes stanza:

    • enabled - Defaults to true. Allows tasks to bind host paths (volumes) inside their container.
    • selinuxlabel - Allows the operator to set a SELinux label to the allocation and task local bind-mounts to containers. If used with volumes.enabled set to false, the labels will still be applied to the standard binds in the container.
plugin "nomad-driver-podman" {
  config {
    volumes {
      enabled      = true
      selinuxlabel = "z"
    }
  }
}
  • gc stanza:

    • container - Defaults to true. Set this to false to keep Nomad from removing containers when their tasks exit.
plugin "nomad-driver-podman" {
  config {
    gc {
      container = false
    }
  }
}
  • recover_stopped (bool) Defaults to false. Allows the driver to start and reuse a previously stopped container after a Nomad client restart. Consider a simple single-node system and a complete reboot: all previously managed containers will be reused instead of being disposed of and recreated.

    WARNING: use of recover_stopped may prevent the Nomad agent from starting after a system restart. This setting has been left in place for compatibility.

plugin "nomad-driver-podman" {
  config {
    recover_stopped = true
  }
}
  • socket_path (string) Defaults to "unix:///run/podman/podman.sock" when running as root or a cgroup V1 system, and "unix:///run/user/<USER_ID>/podman/podman.sock" for rootless cgroup V2 systems
plugin "nomad-driver-podman" {
  config {
    socket_path = "unix:///run/podman/podman.sock"
  }
}
  • disable_log_collection (bool) Defaults to false. Setting this to true disables Nomad's log collection for Podman tasks. If you don't rely on Nomad's log capabilities and exclusively use host-based log aggregation, you may consider this option to avoid the log collection overhead. Be aware that you also lose automatic log rotation.
plugin "nomad-driver-podman" {
  config {
    disable_log_collection = false
  }
}
  • extra_labels ([]string) Defaults to []. Setting this will automatically append Nomad-related labels to Podman tasks. Supports glob matching such as task*. Possible values are:
job_name
job_id
task_group_name
task_name
namespace
node_name
node_id
plugin "nomad-driver-podman" {
  config {
    extra_labels = ["job_name", "job_id", "task_group_name", "task_name", "namespace", "node_name", "node_id"]
  }
}
  • client_http_timeout (string) Defaults to "60s", the timeout used by http.Client requests.
plugin "nomad-driver-podman" {
  config {
    client_http_timeout = "60s"
  }
}

Task Configuration

  • image - The image to run. Accepted transports are docker (default if missing), oci-archive and docker-archive. Image references using short names are treated according to user-configured preferences.
config {
  image = "docker://redis"
}
  • auth - (Optional) Authenticate to the image registry using a static credential. tls_verify can be disabled for insecure registries.
config {
  image = "your.registry.tld/some/image"
  auth {
    username   = "someuser"
    password   = "sup3rs3creT"
    tls_verify = true
  }
}
  • entrypoint - (Optional) A string list overriding the image's entrypoint. Defaults to the entrypoint set in the image.
config {
  entrypoint = [
    "/bin/bash",
    "-c"
  ]
}
  • command - (Optional) The command to run when starting the container.
config {
  command = "some-command"
}
  • args - (Optional) A list of arguments to the optional command. If no command is specified, the arguments are passed directly to the container.
config {
  args = [
    "arg1",
    "arg2",
  ]
}
  • working_dir - (Optional) The working directory for the container. Defaults to the default set in the image.
config {
  working_dir = "/data"
}
  • volumes - (Optional) A list of host_path:container_path:options strings to bind host paths to container paths. Named volumes are not supported.
config {
  volumes = [
    "/some/host/data:/container/data:ro,noexec"
  ]
}
  • tmpfs - (Optional) A list of /container_path strings for tmpfs mount points. See podman run --tmpfs options for details.
config {
  tmpfs = [
    "/var"
  ]
}
  • devices - (Optional) A list of host-device[:container-device][:permissions] definitions. Each entry adds a host device to the container. The optional permissions are a combination of r for read, w for write, and m for mknod(2). See the podman documentation for more details.
config {
  devices = [
    "/dev/net/tun"
  ]
}
  • hostname - (Optional) The hostname to assign to the container. When launching more than one instance of a task (using count) with this option set, every container the task starts will have the same hostname.
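
A minimal example (the hostname value is a placeholder):

config {
  hostname = "redis-1"
}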

  • Forwarding and Exposing Ports - (Optional) See Docker Driver Configuration for details.
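
For illustration, the usual pattern is to declare a port in the group's network block and reference it from the task's ports list, as in the Redis job above (label, image and port number are placeholders):

group "web" {
  network {
    port "http" {
      to = 80
    }
  }

  task "server" {
    driver = "podman"

    config {
      image = "docker://nginx"
      ports = ["http"]
    }
  }
}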

  • init - Run an init inside the container that forwards signals and reaps processes.

config {
  init = true
}
  • init_path - Path to the container-init binary.
config {
  init = true
  init_path = "/usr/libexec/podman/catatonit"
}
  • user - Run the command as a specific user/uid within the container. Note that user is a task-level option, set outside the driver's config block. See Task configuration.
user = "nobody"

config {
}
  • logging - Configure logging. See also plugin option disable_log_collection

driver = "nomad" (default) Podman redirects its combined stdout/stderr logstream directly to a Nomad fifo. Benefits of this mode are: zero overhead, don't have to worry about log rotation at system or Podman level. Downside: you cannot easily ship the logstream to a log aggregator plus stdout/stderr is multiplexed into a single stream..

config {
  logging = {
    driver = "nomad"
  }
}

driver = "journald" The container log is forwarded from Podman to the journald on your host. Next, it's pulled by the Podman API back from the journal into the Nomad fifo (controllable by disable_log_collection) Benefits: all containers can log into the host journal, you can ship a structured stream incl. metadata to your log aggregator. No log rotation at Podman level. You can add additional tags to the journal. Drawbacks: a bit more overhead, depends on Journal (will not work on WSL2). You should configure some rotation policy for your Journal. Ensure you're running Podman 3.1.0 or higher because of bugs in older versions.

config {
  logging = {
    driver = "journald"
    options = [
      {
        "tag" = "redis"
      }
    ]
  }
}
  • memory_reservation - Memory soft limit (unit = b (bytes), k (kilobytes), m (megabytes), or g (gigabytes))

After setting memory reservation, when the system detects memory contention or low memory, containers are forced to restrict their consumption to their reservation. So you should always set the value below --memory, otherwise the hard limit will take precedence. By default, memory reservation will be the same as memory limit.

config {
  memory_reservation = "100m"
}
  • memory_swap - A limit value equal to memory plus swap. The swap LIMIT should always be larger than the memory value.

Unit can be b (bytes), k (kilobytes), m (megabytes), or g (gigabytes). If you don't specify a unit, b is used. Set LIMIT to -1 to enable unlimited swap.

config {
  memory_swap = "180m"
}
  • memory_swappiness - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
config {
  memory_swappiness = 60
}

  • network_mode - Set the network mode for the container. By default the task uses the network stack defined in the task group, see the network stanza. If the group's network behavior is also undefined, it will fall back to bridge in rootful mode or slirp4netns for rootless containers. Possible values are:

  • bridge: create a network stack on the default podman bridge.
  • none: no networking
  • host: use the Podman host network stack. Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure
  • slirp4netns: use slirp4netns to create a user network stack. This is the default for rootless containers. Podman currently does not support it for rootful containers (issue).
  • container:id: reuse another podman container's network stack
  • task:name-of-other-task: join the network of another task in the same allocation.
config {
  network_mode = "bridge"
}
  • cap_add - (Optional) A list of Linux capabilities as strings to pass to --cap-add.
config {
  cap_add = [
    "SYS_TIME"
  ]
}
  • cap_drop - (Optional) A list of Linux capabilities as strings to pass to --cap-drop.
config {
  cap_drop = [
    "MKNOD"
  ]
}
  • selinux_opts - (Optional) A list of process labels the container will use.
config {
  selinux_opts = [
    "type:my_container.process"
  ]
}
  • sysctl - (Optional) A key-value map of sysctl configurations to set for the containers on start.
config {
  sysctl = {
    "net.core.somaxconn" = "16384"
  }
}
  • privileged - (Optional) true or false (default). A privileged container turns off the security features that isolate the container from the host. Dropped Capabilities, limited devices, read-only mount points, Apparmor/SELinux separation, and Seccomp filters are all disabled.
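
For example, to run a privileged task:

config {
  privileged = true
}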

  • tty - (Optional) true or false (default). Allocate a pseudo-TTY for the container.
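
For example, to allocate a pseudo-TTY:

config {
  tty = true
}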

  • labels - (Optional) Set labels on the container.

config {
  labels = {
    "nomad" = "job"
  }
}
  • apparmor_profile - (Optional) Name of an AppArmor profile to be used instead of the default profile. The special value unconfined disables AppArmor for this container:
config {
  apparmor_profile = "your-profile"
}
  • force_pull - (Optional) true or false (default). Always pull the latest image on container start.
config {
  force_pull = true
}
  • readonly_rootfs - (Optional) true or false (default). Mount the rootfs as read-only.
config {
  readonly_rootfs = true
}
  • ulimit - (Optional) A key-value map of ulimit configurations to set for the containers on start.
config {
  ulimit {
    nproc  = "4242"
    nofile = "2048:4096"
  }
}
  • userns - (Optional) Set the user namespace mode for the container.
config {
  userns = "keep-id:uid=200,gid=210"
}
  • pids_limit - (Optional) An integer value that specifies the pid limit for the container.
config {
  pids_limit = 64
}
  • image_pull_timeout - (Optional) Time duration for your pull timeout (defaults to "5m").
config {
  image_pull_timeout = "5m"
}

Network Configuration

Nomad lifecycle hooks combined with the driver's network_mode allow very flexible network namespace definitions. This feature does not build upon the native podman pod structure, but simply reuses the networking namespace of one container for other tasks in the same group.

A typical example is a network server with a metric exporter or log-shipping sidecar. The metric exporter needs access to, for example, a private monitoring port, which should not be exposed to the network and is thus usually bound to localhost.

The repository includes three different example jobs for such a setup. All of them start a nats server and a prometheus-nats-exporter, using different approaches.

You can use curl to verify that the job is working correctly and that you can get prometheus metrics:

curl http://your-machine:7777/metrics

2 Task setup, server defines the network

See examples/jobs/nats_simple_pod.nomad

Here, the server task is started as the main workload and the exporter runs as a poststart sidecar. Because of that, Nomad guarantees that the server is started first, and thus the exporter can easily join the server's network namespace via network_mode = "task:server".

Note that the server configuration file binds the http_port to localhost.

Be aware that ports must be defined in the parent network namespace, here server.
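
A condensed sketch of this layout (image names, port label and number are placeholders; see the full example job in the repository):

group "nats" {
  network {
    port "metrics" {
      to = 7777
    }
  }

  task "server" {
    driver = "podman"

    config {
      image = "docker://nats"
      ports = ["metrics"]
    }
  }

  task "exporter" {
    driver = "podman"

    lifecycle {
      hook    = "poststart"
      sidecar = true
    }

    config {
      image        = "docker://natsio/prometheus-nats-exporter"
      network_mode = "task:server"
    }
  }
}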

3 Task setup, a pause container defines the network

See examples/jobs/nats_pod.nomad

A slightly different setup is demonstrated in this job. It more closely resembles the idea of a pod by starting a pause task, named pod, via a prestart/sidecar hook.

Next, the main workload, server, is started and joins the network namespace by using the network_mode = "task:pod" stanza. Finally, Nomad starts the poststart/sidecar exporter, which also joins the network.

Note that all ports must be defined on the pod level.
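
A condensed sketch of how this differs from the previous layout (the pause image is a placeholder): the pod task carries the ports, and the other tasks join its namespace:

task "pod" {
  driver = "podman"

  lifecycle {
    hook    = "prestart"
    sidecar = true
  }

  config {
    image = "docker://k8s.gcr.io/pause:3.1"
    ports = ["metrics"]
  }
}

# both server and exporter then set:
config {
  network_mode = "task:pod"
}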

2 Task setup, shared Nomad network namespace

See examples/jobs/nats_group.nomad

This example is very different. Both server and exporter join a network namespace which is created and managed by Nomad itself. See nomad network stanza to get started with this generic approach.

Rootless on ubuntu

Edit /etc/default/grub to enable cgroup V2:

GRUB_CMDLINE_LINUX_DEFAULT="quiet cgroup_enable=memory swapaccount=1 systemd.unified_cgroup_hierarchy=1"

sudo update-grub

Ensure that the podman socket is running:

$ systemctl --user status podman.socket
* podman.socket - Podman API Socket
     Loaded: loaded (/usr/lib/systemd/user/podman.socket; disabled; vendor preset: disabled)
     Active: active (listening) since Sat 2020-10-31 19:21:29 CET; 22h ago
   Triggers: * podman.service
       Docs: man:podman-system-service(1)
     Listen: /run/user/1000/podman/podman.sock (Stream)
     CGroup: /user.slice/user-1000.slice/user@1000.service/podman.socket

Ensure that you have a recent version of crun:

$ crun -V
crun version 0.13.227-d38b
commit: d38b8c28fc50a14978a27fa6afc69a55bfdd2c11
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL

nomad job run example.nomad

job "example" {
  datacenters = ["dc1"]
  type        = "service"

  group "cache" {
    count = 1
    restart {
      attempts = 2
      interval = "30m"
      delay    = "15s"
      mode     = "fail"
    }
    network {
      port "redis" { to = 6379 }
    }
    task "redis" {
      driver = "podman"

      config {
        image = "redis"
        ports = ["redis"]
      }

      resources {
        cpu    = 500 # 500 MHz
        memory = 256 # 256MB
      }
    }
  }
}

verify podman ps

$ podman ps
CONTAINER ID  IMAGE                           COMMAND       CREATED        STATUS            PORTS                                                 NAMES
2423ae3efa21  docker.io/library/redis:latest  redis-server  7 seconds ago  Up 6 seconds ago  127.0.0.1:21510->6379/tcp, 127.0.0.1:21510->6379/udp  redis-b640480f-4b93-65fd-7bba-c15722886395

Local Development

Requirements

  • Vagrant >= 2.2
  • VirtualBox >= v6.0

Vagrant Environment Setup

# create the vm
vagrant up

# ssh into the vm
vagrant ssh

Running a Nomad dev agent with the Podman plugin:

# Build the task driver plugin
make dev

# Copy the built nomad-driver-podman executable to examples/plugins/
cp ./build/nomad-driver-podman examples/plugins/

# Start Nomad
nomad agent -config=examples/nomad/server.hcl > server.log 2>&1 &

# Run the client as sudo
sudo nomad agent -config=examples/nomad/client.hcl > client.log 2>&1 &

# Run a job
nomad job run examples/jobs/redis_ports.nomad

# Verify
nomad job status redis

sudo podman ps

Running the tests:

# Start the Podman server
systemctl --user start podman.socket

# Run the tests
CI=1 ./build/bin/gotestsum --junitfile ./build/test/result.xml -- -timeout=15m . ./api
