• This repository has been archived on 18/Mar/2021
• Stars: 252
• Rank: 160,109 (top 4%)
• Language: Go
• License: MIT License
• Created about 7 years ago; updated over 3 years ago


OpenFaaS plugin for Nomad


OpenFaaS - Nomad Provider

This repository contains the OpenFaaS provider for the Nomad scheduler. OpenFaaS allows you to run your private functions as a service. Functions are packaged in Docker containers, which lets you work in any language and interact with any software that can also be installed in the container.

OpenFaaS Architecture

For the simplest installation, only two containers need to run on the Nomad cluster:

  1. OpenFaaS Gateway
  2. OpenFaaS Nomad provider

However, for OpenFaaS to automatically scale your functions based on inbound requests, and to gather metrics from the Nomad provider, three additional containers are optionally run:

  3. Prometheus DB
  4. StatsD server for Prometheus
  5. Grafana for querying Prometheus data

OpenFaaS Gateway

The gateway provides a common API which is used by the command line tool for deploying functions. In addition, it hosts a Prometheus metrics endpoint which provides operational metrics. The gateway does not itself interact with the Nomad cluster; instead, it delegates all requests to the Nomad provider.

Nomad provider

The Nomad provider is responsible for performing any actions on the Nomad server, such as deploying new functions or scaling functions. It also acts as a function proxy. Metrics such as execution duration and other information are emitted by the proxy and captured by the StatsD server. Prometheus regularly collects this information from the StatsD server and stores it as time-series data.
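The proxy's metrics travel to StatsD over a simple text protocol, where a timing sample is rendered as name:value|ms. A minimal sketch of that rendering (the metric name below is invented for illustration, not the provider's actual naming):

```go
package main

import "fmt"

// formatTiming renders a StatsD timing metric line of the kind a
// function proxy could emit. The name used below is hypothetical.
func formatTiming(name string, ms int) string {
	return fmt.Sprintf("%s:%d|ms", name, ms)
}

func main() {
	fmt.Println(formatTiming("gateway.function.gofunction", 42))
	// prints: gateway.function.gofunction:42|ms
}
```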

OpenFaaS Functions

The functions performing the work on OpenFaaS are packaged as Docker images. When running on the cluster these functions do not provide any external interface; instead, interactions are performed through the Nomad provider. When a function is deployed, it is registered with Consul's service catalog. The provider uses this service catalog for service discovery to be able to locate and call the downstream function.
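To illustrate the lookup, here is a minimal sketch of deriving a function's base URL from a Consul catalog response; the sample address and port are invented, and the provider's real code differs:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// catalogEntry holds the two fields we need from an entry in a
// Consul /v1/catalog/service/<name> response.
type catalogEntry struct {
	ServiceAddress string
	ServicePort    int
}

// functionAddr derives the base URL of the first registered instance.
func functionAddr(body []byte) (string, error) {
	var entries []catalogEntry
	if err := json.Unmarshal(body, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("function not registered")
	}
	return fmt.Sprintf("http://%s:%d", entries[0].ServiceAddress, entries[0].ServicePort), nil
}

func main() {
	// Invented sample of the kind of body Consul returns.
	body := []byte(`[{"ServiceAddress":"10.0.0.5","ServicePort":21567}]`)
	addr, err := functionAddr(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(addr) // prints: http://10.0.0.5:21567
}
```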

Starting a local Nomad / Consul environment

First, ensure that you have a recent version of Nomad and Consul installed; the latest versions can be found at:
Consul Versions | HashiCorp Releases
Nomad Versions | HashiCorp Releases

Make sure you download the correct architecture for your machine; binaries are available for most platforms: Mac, Windows, Linux, Arm, etc.

To get things up and running quickly, you can run the bash script located in the root of this repository.

$ source ./startNomad.sh
Discovered IP Address: 192.168.1.113
Starting Consul, redirecting logs to /Users/nicj/log/consul.log
Starting Nomad, redirecting logs to /Users/nicj/log/nomad.log
NOMAD Running

The startup script sets the advertised address to your primary local IP address and runs both Nomad and Consul in the background, redirecting the logs to your home folder.

Using Vagrant for Local Development

Vagrant is a tool for provisioning dev environments. The Vagrantfile governs the Vagrant configuration:

  1. Install Vagrant via the download links or your package manager
  2. Install VirtualBox via the download links, or use a hypervisor of your choice (Vagrant plugins may be required); VMware Fusion is supported
  3. Run vagrant up (defaults to VirtualBox) or vagrant up --provider vmware_fusion

The provisioners install Docker, Nomad, Consul, and Vault (via SaltStack), then launch the OpenFaaS components with Nomad. If successful, the following services will be available over the private network (192.168.50.2):

  • Nomad (v0.8.4) 192.168.50.2:4646
  • Consul (v1.2.0) 192.168.50.2:8500
  • Vault (v0.9.6) 192.168.50.2:8200
  • FaaS Gateway (0.9.14) 192.168.50.2:8080

This setup is intended to streamline local development of the faas-nomad provider against a more complete setup of the HashiCorp ecosystem. It is therefore assumed that the faas-nomad source code is located on your workstation and is configured to listen on 0.0.0.0:8080 when debugging/running the Go process.

Starting a remote Nomad / Consul environment

If you would like to test OpenFaaS running on a cluster in AWS, a Terraform module and instructions can be found here: faas-nomad/terraform at master · hashicorp/faas-nomad · GitHub

Regardless of which method you use, interacting with OpenFaaS is the same.

Running the OpenFaaS application

First, we need to start the OpenFaaS application. Two job files located in the nomad_job_files folder set things up with sensible defaults. To run the main application, execute the following command:

$ nomad run ./nomad_job_files/faas.hcl

==> Monitoring evaluation "a3e54faa"
    Evaluation triggered by job "faas-nomadd"
    Allocation "28f60a54" created: node "867c6baa", group "faas-nomadd"
    Allocation "7223b65d" created: node "d196a533", group "faas-nomadd"
    Allocation "a4dbae6c" created: node "123e18c0", group "faas-nomadd"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "a3e54faa" finished with status "complete"

This job will start an instance of the OpenFaaS gateway and the Nomad provider on every node in the cluster.
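The per-node placement comes from Nomad's system scheduler; the `nomad status` output later in this guide shows faas-nomadd with type "system". A sketch of the general shape of such a job (illustrative only, not the actual contents of nomad_job_files/faas.hcl; the image name is assumed):

```hcl
job "faas-nomadd" {
  datacenters = ["dc1"]
  type        = "system" # schedules one instance on every node

  group "faas-nomadd" {
    task "gateway" {
      driver = "docker"
      config {
        image = "functions/gateway:latest" # assumed image name
      }
    }
    # A second task in this group would run the faas-nomad provider.
  }
}
```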

We can then launch the monitoring job to start Prometheus and Grafana:

$ nomad run ./nomad_job_files/monitoring.hcl

==> Monitoring evaluation "7d9c46df"
    Evaluation triggered by job "faas-monitoring"
    Allocation "e20ace08" created: node "123e18c0", group "faas-monitoring"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "7d9c46df" finished with status "complete"

This job starts a single instance of Prometheus and Grafana on the Nomad cluster.

Setting up Grafana to view application metrics

If you are not using the provided Terraform module, you will need to locate the node on which Grafana is running; assuming you have not changed the job file, the port will be 3000. Log into Grafana with the default username and password, admin.

Once you have successfully logged in, the next step is to create a data source for the Prometheus server.

Configure the options as shown, ensuring that the URL points to the location of your Prometheus server. The next step is to add a dashboard to view the data from the OpenFaaS gateway and provider. A simple dashboard can be found at grafana/faas-dashboard.json; let's add this to Grafana. Clicking the Import button in the Dashboards menu will pop up a box like the one below. Choose the file for the example dashboard and press Import.

Assuming all went well, you should now see the dashboard in Grafana:

Creating and deploying a function

To create functions, we can install the faas-cli tool by running the following command:

$ curl -sL https://cli.openfaas.com | sudo sh

Alternatively, if you are using a Mac, the CLI is also available via brew install faas-cli.

Creating a new function

Changing to a new folder, we can create a new function by running the following command in the CLI:

$ faas-cli new gofunction -lang go
#...
2017/11/17 11:35:49 Cleaning up zip file...

Folder: gofunction created.
  ___                   _____           ____
 / _ \ _ __   ___ _ __ |  ___|_ _  __ _/ ___|
| | | | '_ \ / _ \ '_ \| |_ / _` |/ _` \___ \
| |_| | |_) |  __/ | | |  _| (_| | (_| |___) |
 \___/| .__/ \___|_| |_|_|  \__,_|\__,_|____/
      |_|


Function created in folder: gofunction
Stack file written: gofunction.yml

The command will create two folders and one file in the current directory:

$ tree -L 1  
.
├── gofunction
├── gofunction.yml
└── template

2 directories, 1 file

The gofunction folder is where the source code for your application will live; by default it contains the main entry point, handler.go:

package function

import (
    "fmt"
)

// Handle a serverless request
func Handle(req []byte) string {
    return fmt.Sprintf("Hello, Go. You said: %s", string(req))
}
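The generated handler can be exercised locally without the OpenFaaS watchdog; a minimal harness, assuming nothing beyond the code above:

```go
package main

import "fmt"

// Handle mirrors the generated handler so it can be run locally
// without the OpenFaaS watchdog process.
func Handle(req []byte) string {
	return fmt.Sprintf("Hello, Go. You said: %s", string(req))
}

func main() {
	fmt.Println(Handle([]byte("Nic")))
	// prints: Hello, Go. You said: Nic
}
```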

The Handle method receives the payload sent when the function is called as a slice of bytes and expects any output to be returned as a string. For now, let's keep this function the same and run through the steps for building it. The first thing we need to do is edit the gofunction.yml file and change the image name so that we can push to a Docker repository that our Nomad cluster will be able to pull from. Also, change the gateway address to the location of your OpenFaaS gateway. Changing the gateway in this file saves us from providing the location as an additional parameter.

provider:
  name: faas
  gateway: http://localhost:8080

functions:
  gofunction:
    lang: go
    handler: ./gofunction
    image: nicholasjackson/gofunction

Building our new function

The next step is to build the function; we can do this with the faas-cli build command:

$ faas-cli build -yaml gofunction.yml 
#...
Step 16/17 : ENV fprocess "./handler"
 ---> Using cache
 ---> 5e39e4e30c60
Step 17/17 : CMD ./fwatchdog
 ---> Using cache
 ---> 2ae72de493b7
Successfully built 2ae72de493b7
Successfully tagged nicholasjackson/gofunction:latest
Image: gofunction built.
[0] < Builder done.

The build command executes the Docker build command with the correct Dockerfile for your language. All code is compiled inside the container as a multi-stage build before being packaged into an image.

Pushing the function to the Docker repository

We can either use the faas-cli push command to push this to a Docker repository, or we can push manually with docker push.

$ faas-cli push -yaml gofunction.yml 
[0] > Pushing: gofunction.
The push refers to a repository [docker.io/nicholasjackson/gofunction]
cc9df684d32a: Pushed 
4e12ae9c1d69: Pushed 
cdcffb5144dd: Pushed 
10d64a26ddb0: Pushed 
dbbae7ea208f: Pushed 
2aebd096e0e2: Pushed 
latest: digest: sha256:57c0143772a1e6f585de019022203b8a9108c2df02ff54d610b7252ec4681886 size: 1574
[0] < Pushing done.

Deploying the function

To deploy the function to our Nomad cluster, we can again use the faas-cli tool:

$ faas-cli deploy -yaml gofunction.yml
Deploying: gofunction.
Removing old function.
Deployed.
URL: http://192.168.1.113:8080/function/gofunction

200 OK

If you run the nomad status command, you will now see the additional job running on your Nomad cluster.

$ nomad status
ID                   Type     Priority  Status   Submit Date
OpenFaaS-gofunction  service  1         running  11/17/17 11:52:59 GMT
faas-monitoring      service  50        running  11/15/17 14:43:11 GMT
faas-nomadd          system   50        running  11/15/17 11:00:31 GMT

Running the function

To run the function, we can simply curl the OpenFaaS gateway and pass our payload as a string:

$ curl http://192.168.1.113:8080/function/gofunction -d 'Nic'
Hello, Go. You said: Nic

Or you can use the CLI:

$ echo "Nic" | faas-cli --gateway http://192.168.1.113:8080/ invoke gofunction

That is all there is to it; check out the OpenFaaS community page for some inspiration and other demos: faas/community.md at master · openfaas/faas · GitHub

Datacenters and Limits

By default, the Nomad provider will use the datacenter of the Nomad agent, or dc1. This can be overridden by setting one or more constraints of the form datacenter == value. Limits for CPU and memory can also be set: memory is an integer representing megabytes, and cpu is an integer representing MHz of CPU, where 1024 equals one core.

For example:

$ faas-cli deploy --constraint 'datacenter == dc1' --constraint 'datacenter == dc2'

or from a stack file...

functions:
  facedetect:
    lang: go-opencv
    handler: ./facedetect
    image: nicholasjackson/func_facedetect
    limits:
      memory: 512
      cpu: 1000
    constraints:
      "datacenter == test1"

Nomad Job Constraints

In addition to the datacenter constraint, Nomad job constraints are supported.

For example:

$ faas-cli deploy --constraint '${attr.cpu.arch} = arm'

For compatibility and convenience, the interpolation notation (${}) can be left out, and == is supported instead of =.

All provided constraints are applied to the job (not the group or the task). Leaving out a field (e.g. ${meta.foo} is_set) or using more than one operator (e.g. ${meta.foo} is_set = bar) is currently not supported.
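As an illustration of these rules, here is a sketch of how a constraint string can be normalised into attribute, operator, and value; this is not the provider's actual parser:

```go
package main

import (
	"fmt"
	"strings"
)

// parseConstraint splits a CLI constraint such as
// "${attr.cpu.arch} == arm" into its three parts.
// Illustrative sketch only, not the provider's real code.
func parseConstraint(c string) (attr, op, value string, err error) {
	parts := strings.Fields(c)
	if len(parts) != 3 {
		// Constraints with a missing field or extra operators
		// are not supported.
		return "", "", "", fmt.Errorf("unsupported constraint: %q", c)
	}
	attr = parts[0]
	// The interpolation notation may be left out; normalise it.
	if !strings.HasPrefix(attr, "${") {
		attr = "${" + attr + "}"
	}
	op = parts[1]
	// == is accepted as an alias for =.
	if op == "==" {
		op = "="
	}
	return attr, op, parts[2], nil
}

func main() {
	attr, op, val, _ := parseConstraint("attr.cpu.arch == arm")
	fmt.Println(attr, op, val) // prints: ${attr.cpu.arch} = arm
}
```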

Annotations

Metadata can be added to the Nomad job definition through the OpenFaaS annotations config. The example below adds the key git to the Meta section of the Nomad job definition, which can be accessed through the API.

functions:
  facedetect:
    lang: go-opencv
    handler: ./facedetect
    image: nicholasjackson/func_facedetect
    annotations:
      git: https://github.com/alexellis/super-pancake-fn.git

Secrets API

It is possible to integrate Vault secrets (https://docs.openfaas.com/reference/secrets/) with the Nomad provider. Follow these steps to have OpenFaaS integrate with Nomad + Vault:

  1. First, we need to enable the approle auth backend in Vault:

    vault auth enable approle

  2. We also need to create a policy for faas-nomad and OpenFaaS functions:

    vault policy write openfaas policy.hcl

    Policy file example: https://raw.githubusercontent.com/hashicorp/faas-nomad/master/provisioning/scripts/policy.hcl

    It is important that the policy contains the create, update, delete, and list capabilities matching your secret backend prefix. In this case, the path secret/openfaas/* will work with the default configuration.

    Also, faas-nomad takes care of renewing its own auth token, so we need to make sure the policy includes the path "auth/token/renew-self" with the "update" capability.

  3. Finally, let's set up the approle itself:

    curl -i \
      --header "X-Vault-Token: ${VAULT_TOKEN}" \
      --request POST \
      --data '{"policies": ["openfaas"], "period": "24h"}' \
      https://${VAULT_HOST}/v1/auth/approle/role/openfaas

    This creates the role attached to the policy we just created. The "period" property and its duration are important for renewing long-running service Vault tokens.

    curl -i \
      --header "X-Vault-Token: ${VAULT_TOKEN}" \
      https://${VAULT_HOST}/v1/auth/approle/role/openfaas/role-id

    This produces the role_id needed for the -vault_app_role_id CLI argument.

    curl -i \
      --header "X-Vault-Token: ${VAULT_TOKEN}" \
      --request POST \
      https://${VAULT_HOST}/v1/auth/approle/role/openfaas/secret-id

    This produces the secret_id needed for the -vault_app_secret_id CLI argument.
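Putting together the capabilities described in step 2, a policy.hcl along these lines should work; this is a sketch only, and the linked example file remains the authoritative version:

```hcl
# Matches the default OpenFaaS secret backend prefix.
path "secret/openfaas/*" {
  capabilities = ["create", "update", "delete", "list"]
}

# Lets faas-nomad renew its own auth token.
path "auth/token/renew-self" {
  capabilities = ["update"]
}
```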

Let's assume the Vault parameters have been populated, and you're now running faas-nomad along with the other OpenFaaS components. Now, try out the new faas-cli secret commands:

faas-cli secret create grafana_api_token --from-literal=foo --gateway ${FAAS_GATEWAY}

Now we can use our newly created secret "grafana_api_token" in a new function we want to deploy:

faas-cli deploy --image acornies/grafana-annotate --secret grafana_api_token --env grafana_url=http://grafana.service.consul:3000

Async functions

OpenFaaS has the capability to return immediately when you call a function, adding the work to a NATS Streaming queue. To enable this feature, in addition to the OpenFaaS gateway and Nomad provider, you must run a NATS Streaming server.
To run the server, use the nats.hcl job file.

$ nomad run ./nomad_job_files/nats.hcl

You can then invoke a function using the async-function API; the call will return immediately, and OpenFaaS will queue your work for later execution.

curl -d '{...}' http://gateway:8080/async-function/{function_name}

Configuration and Function timeouts

By default, a function is allowed to run for 30s before it is terminated. Should you require longer-running functions, the timeout is configurable by setting the flag -function_timeout on the Nomad provider, e.g.:

 args = [
   "-nomad_region", "${NOMAD_REGION}",
   "-nomad_addr", "${NOMAD_IP_http}:4646",
   "-consul_addr", "${NOMAD_IP_http}:8500",
   "-statsd_addr", "${NOMAD_ADDR_statsd_statsd}",
   "-node_addr", "${NOMAD_IP_http}",
   "-logger_format", "json",
   "-logger_output", "/logs/nomadd.log",
   "-function_timeout", "5m"
]

This would set the timeout to 5m for a function.

Contributing

The application, including the Docker containers, is built using goreleaser (https://goreleaser.com).

Setup

  • Clone this repo: go get github.com/hashicorp/faas-nomad
  • Create a fork in your own GitHub account
  • Add a new git remote to $GOPATH/src/github.com/hashicorp/faas-nomad pointing at your fork: git remote add fork git@github.com:yourname/faas-nomad.git

Building the application

make build_all runs the command goreleaser -snapshot -rm-dist -skip-validate

Testing the application

make test runs all unit tests in the application. For continuous test running, try GoConvey: http://goconvey.co
