• Stars: 165
• Rank: 228,906 (Top 5%)
• Language: Go
• License: Apache License 2.0
• Created: over 6 years ago
• Updated: 2 months ago


Repository Details

Cloud instance type and price information as a service


Cloud price and service information

The Banzai Cloud Cloudinfo application is a standalone project in the Pipeline ecosystem. While AWS, Google Cloud, Azure, Alibaba and Oracle all provide APIs to query instance type attributes and product pricing information, these APIs often return partly inconsistent data, or responses that are cumbersome to parse. The Cloudinfo service uses these cloud provider APIs to asynchronously fetch and parse instance type attributes and prices, stores the results in an in-memory cache, and makes them available as structured data through a REST API.

Quick start

Building the project is as simple as running a go build command. The result is a statically linked executable binary.

make build
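
Under the hood this is a plain Go build; invoking the toolchain directly should look roughly like the following (the cmd/cloudinfo package path is an assumption based on common Go project layout, not confirmed by this README):

# NOTE: the package path below is assumed, not confirmed by this README.
go build -o build/cloudinfo ./cmd/cloudinfo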

The following options can be configured when starting the service (defaults shown):

build/cloudinfo --help
Usage of Banzai Cloud Cloudinfo Service:
      --config-vault string               enable config Vault
      --config-vault-address string       config Vault address
      --config-vault-token string         config Vault token
      --config-vault-secret-path string   config Vault secret path
      --log-level string                  log level (default "info")
      --log-format string                 log format (default "json")
      --metrics-enabled                   internal metrics are exposed if enabled
      --metrics-address string            the address where internal metrics are exposed (default ":9090")
      --listen-address string             application listen address (default ":8000")
      --scrape                            enable cloud info scraping (default true)
      --scrape-interval duration          duration (in go syntax) between renewing information (default 24h0m0s)
      --provider-amazon                   enable amazon provider
      --provider-google                   enable google provider
      --provider-alibaba                  enable alibaba provider
      --provider-oracle                   enable oracle provider
      --provider-azure                    enable azure provider
      --provider-digitalocean             enable digitalocean provider
      --config string                     Configuration file
      --version                           Show version information
      --dump-config                       Dump configuration to the console (and exit)
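
For example, to serve on a different port with two providers enabled, a shorter scrape interval and debug logging (the values here are illustrative):

build/cloudinfo \
  --listen-address=":8080" \
  --scrape-interval=12h \
  --provider-amazon \
  --provider-google \
  --log-level=debug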

Create a permanent developer configuration:

cp config.toml.dist config.toml

Running cloudinfo requires the web/ project to be built first (Node.js must be installed):

cd web/
npm run build-prod
cd ..
build/cloudinfo

Cloud credentials

The cloudinfo service queries the cloud provider APIs, so it needs credentials to access them.

AWS

Cloudinfo uses the AWS Price List API, which allows querying product pricing in a fine-grained way. Authentication works through the standard AWS SDK for Go, so credentials can be configured via environment variables, shared credentials files or AWS instance profiles. To learn more, read the Specifying Credentials section of the SDK docs.

The easiest way is through environment variables:

export AWS_ACCESS_KEY_ID=<access-key-id>
export AWS_SECRET_ACCESS_KEY=<secret-access-key>
cloudinfo --provider-amazon
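
Alternatively, the SDK also picks up a shared credentials file from its default location; the layout below is the standard AWS format:

# Standard AWS shared credentials file, read by the AWS SDK for Go.
mkdir -p ~/.aws
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = <access-key-id>
aws_secret_access_key = <secret-access-key>
EOF
cloudinfo --provider-amazon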

Create AWS credentials with the aws command-line tool:

aws iam create-user --user-name cloudinfo
aws iam put-user-policy --user-name cloudinfo --policy-name cloudinfo_policy --policy-document file://credentials/amazon_cloudinfo_role.json
aws iam create-access-key --user-name cloudinfo

Google Cloud

On Google Cloud the project uses two different APIs to collect the full product information: the Cloud Billing API and the Compute Engine API. Authentication to the Cloud Billing Catalog API is done through an API key that can be generated on the Google Cloud Console. Once you have an API key, billing is enabled for the project, and the Cloud Billing API is enabled as well, you can start using the API.

The Compute Engine API authenticates the standard Google Cloud way, with service accounts instead of API keys. Once you have a service account, download its JSON credentials file from the Google Cloud Console and point to it through an environment variable:

export GOOGLE_CREDENTIALS_FILE=<path-to-my-service-account-file>.json
export GOOGLE_PROJECT=<google-project-id>
cloudinfo --provider-google

Create a service account key with the gcloud command-line tool:

gcloud services enable container.googleapis.com compute.googleapis.com cloudbilling.googleapis.com cloudresourcemanager.googleapis.com
gcloud iam service-accounts create cloudinfoSA --display-name "Service account used for managing Cloudinfo"
gcloud iam roles create cloudinfo --project [PROJECT-ID] --title cloudinfo --description "cloudinfo roles" --permissions compute.machineTypes.list,compute.regions.list,compute.zones.list
gcloud projects add-iam-policy-binding [PROJECT-ID] --member='serviceAccount:cloudinfoSA@[PROJECT-ID].iam.gserviceaccount.com' --role='projects/[PROJECT-ID]/roles/cloudinfo'
gcloud iam service-accounts keys create cloudinfo.gcloud.json --iam-account=cloudinfoSA@[PROJECT-ID].iam.gserviceaccount.com
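
The key file written in the last step is what the environment variables above should reference:

export GOOGLE_CREDENTIALS_FILE=$PWD/cloudinfo.gcloud.json
export GOOGLE_PROJECT=[PROJECT-ID]
cloudinfo --provider-google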

Azure

Two different Azure APIs are used, providing pricing and machine type information respectively: pricing info can be queried through the Rate Card API, and machine types through the Compute API's list virtual machine sizes request. Authentication is done via standard Azure service principals.

Follow this link to learn how to generate a service principal with the Azure SDK, then set the following environment variables:

export AZURE_SUBSCRIPTION_ID=<subscription-id>
export AZURE_TENANT_ID=<tenant-id>
export AZURE_CLIENT_ID=<client-id>
export AZURE_CLIENT_SECRET=<client-secret>
cloudinfo --provider-azure

Create a service principal with the az command-line tool:

cd credentials
az provider register --namespace Microsoft.Compute
az provider register --namespace Microsoft.Resources
az provider register --namespace Microsoft.ContainerService
az provider register --namespace Microsoft.Commerce
az role definition create --verbose --role-definition @azure_cloudinfo_role.json
az ad sp create-for-rbac --name "CloudinfoSP" --role "Cloudinfo" --sdk-auth true > azure_cloudinfo.auth
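
The --sdk-auth flag writes a JSON auth file whose fields map onto the environment variables above; assuming jq is available, they can be extracted like this:

# Field names follow the az --sdk-auth JSON output format.
export AZURE_SUBSCRIPTION_ID=$(jq -r .subscriptionId azure_cloudinfo.auth)
export AZURE_TENANT_ID=$(jq -r .tenantId azure_cloudinfo.auth)
export AZURE_CLIENT_ID=$(jq -r .clientId azure_cloudinfo.auth)
export AZURE_CLIENT_SECRET=$(jq -r .clientSecret azure_cloudinfo.auth)
cloudinfo --provider-azure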

Oracle

Authentication is done either through individual credentials or through an OCI CLI configuration file. Follow this link to learn how to create such a file, then set one of the two groups of environment variables below:

export ORACLE_TENANCY_OCID=<tenancy-ocid>
export ORACLE_USER_OCID=<user-ocid>
export ORACLE_REGION=<region>
export ORACLE_FINGERPRINT=<fingerprint>
export ORACLE_PRIVATE_KEY=<private-key>
export ORACLE_PRIVATE_KEY_PASSPHRASE=<private-key-passphrase>
# OR
export ORACLE_CONFIG_FILE_PATH=<config-file-path>
export ORACLE_PROFILE=<profile>

cloudinfo --provider-oracle
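
For the config-file route, the standard OCI CLI configuration file layout looks like this (a sketch of the usual ~/.oci/config format; see the Oracle docs for details):

# ~/.oci/config -- standard OCI CLI configuration layout
[DEFAULT]
user=<user-ocid>
fingerprint=<fingerprint>
key_file=<path-to-private-key>.pem
tenancy=<tenancy-ocid>
region=<region>

export ORACLE_CONFIG_FILE_PATH=~/.oci/config
export ORACLE_PROFILE=DEFAULT
cloudinfo --provider-oracle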

Alibaba

The easiest way to authenticate is through environment variables:

export ALIBABA_ACCESS_KEY_ID=<access-key-id>
export ALIBABA_ACCESS_KEY_SECRET=<access-key-secret>
export ALIBABA_REGION_ID=<region-id>
cloudinfo --provider-alibaba

Create Alibaba credentials with the Alibaba Cloud CLI:

aliyun ram CreateUser --UserName CloudInfo --DisplayName CloudInfo
aliyun ram AttachPolicyToUser --UserName CloudInfo --PolicyName AliyunECSReadOnlyAccess --PolicyType System
aliyun ram AttachPolicyToUser --UserName CloudInfo --PolicyName AliyunBSSReadOnlyAccess --PolicyType System
aliyun ram CreateAccessKey --UserName CloudInfo

DigitalOcean

export DIGITALOCEAN_ACCESS_TOKEN=<access-token>
cloudinfo --provider-digitalocean

Create a new API access token on the DigitalOcean Console.
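
Before starting the service, the token can be sanity-checked against the DigitalOcean account endpoint (illustrative):

# Returns the account status if the token is valid.
curl -s -H "Authorization: Bearer $DIGITALOCEAN_ACCESS_TOKEN" \
  "https://api.digitalocean.com/v2/account" | jq .account.status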

Configuring multiple providers

Cloud providers are enabled one by one. To configure multiple providers, simply pass a flag for each and set up credentials for each. Here's an example of how to configure three providers:

export AWS_SECRET_ACCESS_KEY=<secret-access-key>
export AWS_ACCESS_KEY_ID=<access-key-id>
export ALIBABA_ACCESS_KEY_ID=<access-key-id>
export ALIBABA_ACCESS_KEY_SECRET=<access-key-secret>
export ALIBABA_REGION_ID=<region-id>
export DIGITALOCEAN_ACCESS_TOKEN=<access-token>

cloudinfo --provider-amazon --provider-alibaba --provider-digitalocean

API calls

For the complete OpenAPI 3.0 documentation, check out this URL.

Here are a few cURL examples to get started:

curl  -ksL -X GET "http://localhost:9090/api/v1/providers/azure/services/compute/regions/" | jq .
[
  {
    "id": "centralindia",
    "name": "Central India"
  },
  {
    "id": "koreacentral",
    "name": "Korea Central"
  },
  {
    "id": "southindia",
    "name": "South India"
  },
  ...
]
curl  -ksL -X GET "http://localhost:9090/api/v1/providers/amazon/services/compute/regions/eu-west-1/products" | jq .
{
  "products": [
    {
      "type": "i3.8xlarge",
      "onDemandPrice": 2.752,
      "cpusPerVm": 32,
      "memPerVm": 244,
      "gpusPerVm": 0,
      "ntwPerf": "10 Gigabit",
      "ntwPerfCategory": "high",
      "spotPrice": [
        {
          "zone": "eu-west-1c",
          "price": 1.6018
        },
        {
          "zone": "eu-west-1b",
          "price": 0.9563
        },
        {
          "zone": "eu-west-1a",
          "price": 2.752
        }
      ]
    },
    ...
  ]
}
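
The structured responses compose well with jq. For example, to list the three cheapest instance types in a region by on-demand price, using the fields shown above:

# Sort products by price and keep the three cheapest entries.
curl -ksL "http://localhost:8000/api/v1/providers/amazon/services/compute/regions/eu-west-1/products" \
  | jq '.products | sort_by(.onDemandPrice) | .[:3] | map({type, onDemandPrice})'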

FAQ

1. The API responds with status code 500 after starting the cloudinfo app and making a cURL request

After the cloudinfo app is started, it takes a few minutes to cache all the product information from the providers. Until the results are cached, responses may be unreliable; we're planning to improve this in the future. After a few minutes everything should work fine.
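
If you script against the service, a simple readiness loop sidesteps the warm-up window (a sketch; the bare providers endpoint is an assumption based on the API paths shown earlier):

# Poll until the service responds successfully (assumed endpoint).
until curl -fsS "http://localhost:8000/api/v1/providers" > /dev/null; do
  echo "waiting for the cloudinfo cache to warm up..."
  sleep 10
done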

2. Why is the product info parsed asynchronously and periodically instead of relying on static data?

Cloud providers release new instance types and regions quite frequently and change on-demand pricing from time to time, so this info must be kept up to date without manual modification every time something changes on the provider's side. After the initial query, the cloudinfo app parses this info from the cloud providers once per day. The frequency of this querying and caching is configurable with the --scrape-interval switch and is set to 24h by default.

3. What happens if the cloudinfo app cannot cache the AWS product info?

If caching fails, the cloudinfo app will try to reach the AWS Price List API on the fly when a request is sent (and it will also cache the resulting information).

4. What kind of AWS permissions do I need to use the project?

The cloudinfo app queries the AWS Price List API to keep up-to-date info about instance types, regions and on-demand pricing. You'll need IAM access as described here in example 11 of the AWS IAM docs.

If you don't use Prometheus to track spot instance pricing, your IAM user will also need access to the spot price history from the AWS API, which means granting the ec2:DescribeSpotPriceHistory permission; a minimal policy is sketched below.
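
A minimal inline policy granting just that permission might look like this (a sketch using standard IAM policy syntax; the policy name is arbitrary):

aws iam put-user-policy --user-name cloudinfo --policy-name spot-price-history \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      { "Effect": "Allow", "Action": "ec2:DescribeSpotPriceHistory", "Resource": "*" }
    ]
  }'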

5. What is the advantage of using Prometheus to determine spot prices?

Prometheus is becoming the de facto monitoring solution in the cloud native world, and it includes a time series database as well. When using the Banzai Cloud spot price exporter, spot price history is collected as time series data that can be queried for averages, maximums and predictions. This gives a richer picture than relying on the current spot price, which may be a momentary spike or sit on a downward or upward trend. You can fine-tune the query (with the -prometheus-query switch) if you want to change how spot instance prices are scored. By default, the spot price averages of the last week are queried and instance types are sorted based on this score.
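
As an illustration, a last-week average like the default scoring can be queried through the Prometheus HTTP API (the metric name below is an assumption and depends on what your exporter exposes):

# NOTE: aws_spot_current_price is an assumed metric name, not confirmed here.
curl -G "http://<prometheus-host>:9090/api/v1/query" \
  --data-urlencode 'query=avg_over_time(aws_spot_current_price{instance_type="i3.8xlarge"}[1w])'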

6. What happens if my Prometheus server cannot be reached or if it doesn't have the necessary spot price metrics?

If the cloudinfo app fails to reach the Prometheus query API, or it cannot find the proper metrics, it falls back to querying the current spot prices from the AWS API.

License

Copyright (c) 2017-2019 Banzai Cloud, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

More Repositories

1. bank-vaults (Go, 1,864 stars): A Vault swiss-army knife: a K8s operator, Go client with automatic token renewal, automatic configuration, multiple unseal options and more. A CLI tool to init, unseal and configure Vault (auth methods, secret engines). Direct secret injection into Pods.
2. pipeline (Go, 1,494 stars): Banzai Cloud Pipeline is a solution-oriented application platform which allows enterprises to develop, deploy and securely scale container-based applications in multi- and hybrid-cloud environments.
3. logging-operator (Go, 1,027 stars): Logging operator for Kubernetes based on Fluentd and Fluentbit.
4. koperator (Go, 782 stars): Oh no! Yet another Apache Kafka operator for Kubernetes.
5. istio-operator (Go, 534 stars): An operator that manages Istio deployments on Kubernetes.
6. banzai-charts (Mustache, 367 stars): Curated list of Banzai Cloud Helm charts used by the Pipeline Platform.
7. thanos-operator (Go, 280 stars): Kubernetes operator for deploying Thanos.
8. pke (Go, 263 stars): PKE is an extremely simple CNCF certified Kubernetes installer and distribution, designed to work on any cloud, VM or bare metal.
9. hpa-operator (Go, 239 stars): Horizontal Pod Autoscaler operator for Kubernetes. Annotate and let the HPA operator do the rest.
10. dast-operator (Go, 189 stars): Dynamic Application and API Security Testing.
11. spark-metrics (Scala, 175 stars): Spark metrics related custom classes and sinks (e.g. Prometheus).
12. telescopes (Go, 163 stars): Telescopes is a cloud instance types and full cluster layout recommender consisting of on-demand and spot/preemptible AWS EC2, Google, Azure, Oracle and Alibaba cloud instances.
13. hollowtrees (Go, 155 stars): A ruleset based watchguard to keep spot/preemptible instance based clusters safe, with plugins for VMs, Kubernetes, Prometheus and Pipeline.
14. kurun (Go, 132 stars): Run main.go in Kubernetes with one command, also port-forward your app into Kubernetes.
15. jwt-to-rbac (Go, 113 stars): JWT-to-RBAC lets you automatically generate RBAC resources based on JWT tokens.
16. service-tools (TypeScript, 99 stars): Prepare your Node.js application for production.
17. nodepool-labels-operator (Go, 69 stars): Nodepool Labels operator for Kubernetes.
18. drone-kaniko (Shell, 56 stars): A thin shim-wrapper around the official Google Kaniko Docker image to make it behave like the Drone Docker plugin.
19. pvc-operator (Go, 51 stars)
20. satellite (Go, 50 stars): Determine your cloud provider with a simple HTTP call.
21. prometheus-jmx-exporter-operator (Go, 45 stars)
22. chartsec (Go, 45 stars): Helm Chart security scanner.
23. anchore-image-validator (Go, 44 stars): Anchore Image Validator lets you automatically detect or block security issues just before a Kubernetes pod starts.
24. spot-price-exporter (Go, 41 stars): Prometheus exporter to track spot price history.
25. logrus-runtime-formatter (Go, 40 stars): Golang runtime package based automatic function, line and package fields for Logrus.
26. kubeconfiger (Go, 36 stars): Example tool for cleaning up untrusted kubeconfig files.
27. spot-termination-exporter (Go, 36 stars): Prometheus spot instance exporter to monitor AWS instance termination with Hollowtrees.
28. imps (Go, 32 stars): ImagePullSecrets controller allows you to distribute image pull secrets based on namespace/pod matches.
29. banzai-cli (Go, 30 stars): CLI for the Banzai Cloud Pipeline platform.
30. allspark (Go, 26 stars): AllSpark is a simple building block for quickly building web microservice deployments for demo purposes.
31. istio-client-go (Go, 23 stars): Golang API representation for Istio resources.
32. istio-external-demo (23 stars): Working example for restricting access to external services from an Istio service mesh.
33. k8s-cncf-meetup (DIGITAL Command Language, 19 stars): Kubernetes and Cloud Native Computing Meetup slides.
34. spot-config-webhook (Go, 19 stars): A Kubernetes mutating admission webhook that sets the scheduler of specific pods based on a ConfigMap.
35. aws-billing-alarm (Shell, 15 stars)
36. aws-autoscaling-exporter (Go, 14 stars): Prometheus exporter with AWS auto scaling group and instance level metrics for Hollowtrees.
37. docker-cruise-control (Shell, 13 stars): LinkedIn's Cruise Control container image built for Koperator (https://github.com/banzaicloud/koperator).
38. lambda-slack-bot (Go, 12 stars): AWS Lambda Golang Slack bot to list running EC2 instances.
39. fluent-plugin-label-router (Ruby, 11 stars): Fluentd plugin to route records based on Kubernetes labels and namespace.
40. preemption-exporter (Go, 11 stars): Prometheus preemptible instance exporter to monitor GCP instance termination with Hollowtrees.
41. docker-kafka (Dockerfile, 10 stars): Dockerfile for building a Docker image for Apache Kafka.
42. ht-k8s-action-plugin (Go, 10 stars): Hollowtrees plugin used to interact with Kubernetes on specific event triggers.
43. crd-updater (Go, 10 stars): Helm 3 library chart that emulates Helm 2 CRD update behavior.
44. jmx-exporter-loader (Java, 10 stars)
45. go-code-generation-demo (Go, 9 stars)
46. bank-vaults-workshop (7 stars): Material for the Hacktivity 2019 - Bank-Vaults workshop.
47. pipeline-cp-launcher (Makefile, 7 stars): Pipeline ControlPlane launcher using AWS CloudFormation or Azure ARM template, running on Kubernetes.
48. go-cruise-control (Go, 6 stars): A client library written in Golang for interacting with LinkedIn Cruise Control using its HTTP API.
49. zeppelin-pdi-example (6 stars)
50. bank-vaults-docs (Shell, 5 stars): Bank-Vaults documentation.
51. logging-operator-docs (Shell, 4 stars): Logging operator documentation.
52. koperator-docs (HTML, 4 stars): Documentation for Koperator - the operator for managing Apache Kafka on Kubernetes.
53. banzai-types (Go, 4 stars): Common types, configs and utils used across several Banzai Cloud projects.
54. circleci-orbs (Makefile, 3 stars)
55. ht-aws-asg-action-plugin (Go, 3 stars): Hollowtrees plugin to detach instances from auto scaling groups.
56. drone-plugin-sonar (Go, 3 stars): Drone plugin for static code analysis.
57. fluent-plugin-tag-normaliser (Ruby, 2 stars): Fluentd output plugin to transform tags based on record content.
58. cicd-go (Go, 2 stars): A Go client for the Pipeline CI/CD subsystem.
59. gin-utilz (Go, 2 stars): Gin framework utilities.
60. banzailint (Makefile, 2 stars): Custom lint rules for Banzai Cloud code.
61. docker-jmx-exporter (Dockerfile, 2 stars): Docker image for JMX-exporter.
62. .github (1 star): GitHub default files.
63. developer-guide (1 star): Guide for developers on writing code and maintaining projects.
64. cadence-aws-sdk (Go, 1 star): Cadence wrapper around the AWS Go SDK to make working with the AWS API easier in Cadence workflows.
65. drone-plugin-k8s-client (Go, 1 star): Drone plugin implementation for k8s operations.
66. thanos-operator-docs (Shell, 1 star): Thanos operator documentation.
67. dynamic-class-gen (Java, 1 star)
68. custom-runner (Go, 1 star): Go custom runner.
69. cluster-registry (Go, 1 star)
70. log-socket (Go, 1 star): Service and CLI tool for forwarding logs through a WebSocket connection.
71. kube-service-annotate (Go, 1 star): A Kubernetes mutating webhook to annotate services based on rules.
72. pipeline-sdk (Go, 1 star): SDK for extending Pipeline.
73. integrated-service-sdk (Go, 1 star): Client SDK for the Integrated Service Operator.
74. pipeline-cp-images (Shell, 1 star): Pipeline ControlPlane images.
75. dockerized-newman (1 star): Automated end-to-end testing with Postman in Docker.
76. drone-plugin-zeppelin-client (Go, 1 star): Zeppelin REST API client plugin for Drone. A step in the Pipeline PaaS CI/CD component to provision a Kubernetes cluster or use a managed one.