

CloudAcademy Terraform 1.x AWS Course

This repo contains example Terraform configurations for building AWS infrastructure.

AWS Exercises

The exercises directory contains seven different AWS infrastructure provisioning exercises.

Exercise 1

Create a simple AWS VPC spanning 2 AZs. Public subnets will be created, together with an internet gateway and a single route table. A t3.micro instance will be deployed and installed with Nginx for web serving. Security groups will be created and deployed to secure all network traffic between the various components.

https://github.com/cloudacademy/terraform-aws/tree/main/exercises/exercise1
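
As a rough illustration of the pattern described above, the internet gateway, public route table, and Nginx instance might be wired together as follows. This is a minimal sketch only: the resource names, the AMI lookup, and the subnet references are assumptions, not the repo's actual configuration.

```hcl
# Illustrative sketch of the Exercise 1 pattern; names and references
# are assumptions, not the repo's actual code.
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  # Send all non-local traffic out via the internet gateway
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id # assumed Ubuntu AMI data source
  instance_type = "t3.micro"
  subnet_id     = aws_subnet.public[0].id

  # Bootstrap Nginx at first boot
  user_data = <<-EOF
    #!/bin/bash
    apt-get -y update
    apt-get -y install nginx
  EOF
}
```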

AWS Architecture

Project Structure

├── main.tf
├── outputs.tf
├── terraform.tfvars
└── variables.tf

TF Variable Notes

  • workstation_ip: The Terraform variable workstation_ip represents your workstation's external public IP address, expressed in CIDR notation. It is used during provisioning to restrict SSH access to the instance(s) that Terraform creates, a safety measure to prevent anyone else from attempting SSH access. This public IP address is unique to each user; the easiest way to find it is to type "what is my ip address" into a Google search. For example, if Google responded with 202.10.23.16, the value assigned to workstation_ip would be 202.10.23.16/32 (the /32 in this case indicates a single IP address).

  • key_name: The Terraform variable key_name represents the name of the AWS SSH key pair used to allow SSH access to the bastion host created at provisioning time. If you intend to use the bastion host, you will need to create your own SSH key pair ahead of time (typically within the AWS EC2 console).

    • The required workstation_ip and key_name variables can be set in several ways; one is to prefix the variable name with TF_VAR_ and export it as an environment variable in your shell:

    • Linux: export TF_VAR_workstation_ip=202.10.23.16/32 and export TF_VAR_key_name=your_ssh_key_name

    • Windows: set TF_VAR_workstation_ip=202.10.23.16/32 and set TF_VAR_key_name=your_ssh_key_name

  • Terraform environment variables are documented here: https://www.terraform.io/cli/config/environment-variables
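
Since neither variable can sensibly be given a default, the exercise's variables.tf presumably declares them along the following lines. This is a minimal sketch of what such declarations look like, not the repo's actual code.

```hcl
# Hypothetical declarations for the two required variables; the
# descriptions and absence of defaults are assumptions.
variable "workstation_ip" {
  description = "Your workstation's public IP address in CIDR notation, e.g. 202.10.23.16/32"
  type        = string
}

variable "key_name" {
  description = "Name of an existing AWS EC2 key pair, used for SSH access to the bastion host"
  type        = string
}
```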

Exercise 2

Create an advanced AWS VPC spanning 2 AZs with both public and private subnets. An internet gateway and NAT gateway will be deployed into it. Public and private route tables will be established. An application load balancer (ALB) will be installed which will load balance traffic across an auto scaling group (ASG) of Nginx web servers. Security groups will be created and deployed to secure all network traffic between the various components.

https://github.com/cloudacademy/terraform-aws/tree/main/exercises/exercise2
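
The NAT gateway and private route table described above typically follow this shape. An illustrative sketch only: the resource names and cross-references are assumptions, not the repo's configuration.

```hcl
# Illustrative NAT gateway pattern: an Elastic IP, a NAT gateway in a
# public subnet, and a private route table that defaults through it.
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id # NAT gateway lives in a public subnet
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  # Private subnets reach the internet outbound-only, via the NAT gateway
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}
```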

AWS Architecture

Project Structure

├── ec2.userdata
├── main.tf
├── outputs.tf
├── terraform.tfvars
└── variables.tf

TF Variable Notes

  • The workstation_ip and key_name variables are identical to those described in Exercise 1's TF Variable Notes, and can be set the same way (for example, via TF_VAR_-prefixed environment variables).

Exercise 3

Same AWS architecture as used in Exercise 2. This exercise demonstrates a different Terraform technique, using the "count" meta-argument to configure the public and private subnets as well as their respective route tables.

https://github.com/cloudacademy/terraform-aws/tree/main/exercises/exercise3
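
The count meta-argument technique can be sketched like this: one subnet and one route table association are stamped out per availability zone. Illustrative only; the repo's variable names and CIDR scheme will differ.

```hcl
# Hypothetical count-based subnet creation: one public subnet per AZ.
variable "availability_zones" {
  type    = list(string)
  default = ["us-west-2a", "us-west-2b"]
}

resource "aws_subnet" "public" {
  count             = length(var.availability_zones)
  vpc_id            = aws_vpc.main.id
  availability_zone = var.availability_zones[count.index]
  # Carve a /24 per subnet out of an assumed 10.0.0.0/16 VPC range
  cidr_block        = cidrsubnet("10.0.0.0/16", 8, count.index)
}

resource "aws_route_table_association" "public" {
  count          = length(var.availability_zones)
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}
```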

AWS Architecture

Project Structure

├── ec2.userdata
├── main.tf
├── outputs.tf
├── terraform.tfvars
└── variables.tf

TF Variable Notes

  • The workstation_ip and key_name variables are identical to those described in Exercise 1's TF Variable Notes, and can be set the same way (for example, via TF_VAR_-prefixed environment variables).

Exercise 4

Create an advanced AWS VPC to host a fully functioning cloud native application.

Cloud Native Application

The VPC will span 2 AZs, and have both public and private subnets. An internet gateway and NAT gateway will be deployed into it. Public and private route tables will be established. An application load balancer (ALB) will be installed which will load balance traffic across an auto scaling group (ASG) of Nginx web servers installed with the cloud native application frontend and API. A database instance running MongoDB will be installed in the private zone. Security groups will be created and deployed to secure all network traffic between the various components.

For demonstration purposes, and to reduce running costs, both the frontend and the API will be deployed to the same set of ASG instances.

https://github.com/cloudacademy/terraform-aws/tree/main/exercises/exercise4

AWS Architecture

The auto scaling web application layer bootstraps itself with both the Frontend and API components by pulling down their latest respective releases from the cloudacademy/voteapp-frontend-react-2020 and cloudacademy/voteapp-api-go repos.

The bootstrapping process for the Frontend and API components is codified within a template_cloudinit_config block located in the application module's main.tf file:

data "template_cloudinit_config" "config" {
  gzip          = false
  base64_encode = false

  #userdata
  part {
    content_type = "text/x-shellscript"
    content      = <<-EOF
    #! /bin/bash
    apt-get -y update
    apt-get -y install nginx
    apt-get -y install jq

    ALB_DNS=${aws_lb.alb1.dns_name}
    MONGODB_PRIVATEIP=${var.mongodb_ip}
    
    mkdir -p /tmp/cloudacademy-app
    cd /tmp/cloudacademy-app

    echo ===========================
    echo FRONTEND - download latest release and install...
    mkdir -p ./voteapp-frontend-react-2020
    pushd ./voteapp-frontend-react-2020
    curl -sL https://api.github.com/repos/cloudacademy/voteapp-frontend-react-2020/releases/latest | jq -r '.assets[0].browser_download_url' | xargs curl -OL
    INSTALL_FILENAME=$(curl -sL https://api.github.com/repos/cloudacademy/voteapp-frontend-react-2020/releases/latest | jq -r '.assets[0].name')
    tar -xvzf $INSTALL_FILENAME
    rm -rf /var/www/html
    cp -R build /var/www/html
    cat > /var/www/html/env-config.js << EOFF
    window._env_ = {REACT_APP_APIHOSTPORT: "$ALB_DNS"}
    EOFF
    popd

    echo ===========================
    echo API - download latest release, install, and start...
    mkdir -p ./voteapp-api-go
    pushd ./voteapp-api-go
    curl -sL https://api.github.com/repos/cloudacademy/voteapp-api-go/releases/latest | jq -r '.assets[] | select(.name | contains("linux-amd64")) | .browser_download_url' | xargs curl -OL
    INSTALL_FILENAME=$(curl -sL https://api.github.com/repos/cloudacademy/voteapp-api-go/releases/latest | jq -r '.assets[] | select(.name | contains("linux-amd64")) | .name')
    tar -xvzf $INSTALL_FILENAME
    #start the API up...
    MONGO_CONN_STR=mongodb://$MONGODB_PRIVATEIP:27017/langdb ./api &
    popd

    systemctl restart nginx
    systemctl status nginx
    echo fin v1.00!

    EOF    
  }
}

ALB Target Group Configuration

The ALB will be configured with a single listener (port 80). Two target groups will be established. The frontend target group points to the Nginx web server (port 80). The API target group points to the custom API service (port 8080).
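
A hedged sketch of this listener and target group arrangement follows. Only the ports come from the text above; the path-based routing condition and all resource names are assumptions about how the two groups might be distinguished.

```hcl
# Illustrative two-target-group setup behind a single port-80 listener.
resource "aws_lb_target_group" "frontend" {
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

resource "aws_lb_target_group" "api" {
  port     = 8080
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.alb1.arn
  port              = 80
  protocol          = "HTTP"

  # Anything not matched by a rule goes to the frontend
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.frontend.arn
  }
}

resource "aws_lb_listener_rule" "api" {
  listener_arn = aws_lb_listener.http.arn

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.api.arn
  }

  # Hypothetical path pattern routing API traffic to port 8080 targets
  condition {
    path_pattern {
      values = ["/api/*"]
    }
  }
}
```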

AWS Architecture

Project Structure

├── main.tf
├── modules
│   ├── application
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── vars.tf
│   ├── bastion
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── vars.tf
│   ├── network
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── vars.tf
│   ├── security
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── vars.tf
│   └── storage
│       ├── install.sh
│       ├── main.tf
│       ├── outputs.tf
│       └── vars.tf
├── outputs.tf
├── terraform.tfvars
└── variables.tf

TF Variable Notes

  • The workstation_ip and key_name variables are identical to those described in Exercise 1's TF Variable Notes, and can be set the same way (for example, via TF_VAR_-prefixed environment variables).

Exercise 5

Refactoring of the Cloud Native Application (Exercise 4) to use Ansible for configuration management.

Cloud Native Application

The VPC will span 2 AZs, and have both public and private subnets. An internet gateway and NAT gateway will be deployed into it. Public and private route tables will be established. An application load balancer (ALB) will be installed which will load balance traffic across an auto scaling group (ASG) of Nginx web servers installed with the cloud native application frontend and API. A database instance running MongoDB will be installed in the private zone. Security groups will be created and deployed to secure all network traffic between the various components.

For demonstration purposes, and to reduce running costs, both the frontend and the API will be deployed to the same set of ASG instances.

https://github.com/cloudacademy/terraform-aws/tree/main/exercises/exercise5

AWS Architecture

The auto scaling web application layer bootstraps itself with both the Frontend and API components by pulling down their latest respective releases from the cloudacademy/voteapp-frontend-react-2020 and cloudacademy/voteapp-api-go repos.

The bootstrapping process for the Frontend and API components is now performed by Ansible. An Ansible playbook is executed from within the root main.tf file:

resource "null_resource" "ansible" {
  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    working_dir = "${path.module}/ansible"
    command     = <<EOT
      sleep 120 #time to allow VMs to come online and stabilize
      mkdir -p ./logs

      sed \
      -e 's/BASTION_IP/${module.bastion.public_ip}/g' \
      -e 's/WEB_IPS/${join("\\n", module.application.private_ips)}/g' \
      -e 's/MONGO_IP/${module.storage.private_ip}/g' \
      ./templates/hosts > hosts

      sed \
      -e 's/BASTION_IP/${module.bastion.public_ip}/g' \
      -e 's/SSH_KEY_NAME/${var.key_name}/g' \
      ./templates/ssh_config > ssh_config

      #required for macos only
      export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES

      #ANSIBLE
      ansible-playbook -v \
      -i hosts \
      --extra-vars "ALB_DNS=${module.application.dns_name}" \
      --extra-vars "MONGODB_PRIVATEIP=${module.storage.private_ip}" \
      ./playbooks/master.yml
      echo finished!
    EOT
  }

  depends_on = [
    module.bastion,
    module.application
  ]
}

ALB Target Group Configuration

The ALB will be configured with a single listener (port 80). Two target groups will be established. The frontend target group points to the Nginx web server (port 80). The API target group points to the custom API service (port 8080).

AWS Architecture

Project Structure

├── main.tf
├── ansible
│   ├── ansible.cfg
│   ├── logs
│   │   └── ansible.log
│   ├── playbooks
│   │   ├── database.yml
│   │   ├── deployapp.yml
│   │   ├── files
│   │   │   ├── api.sh
│   │   │   ├── db.sh
│   │   │   └── frontend.sh
│   │   └── master.yml
│   └── templates
│       ├── hosts
│       └── ssh_config
├── modules
│   ├── application
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── vars.tf
│   ├── bastion
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── vars.tf
│   ├── network
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── vars.tf
│   ├── security
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── vars.tf
│   └── storage
│       ├── install.sh
│       ├── main.tf
│       ├── outputs.tf
│       └── vars.tf
├── outputs.tf
├── terraform.tfvars
└── variables.tf

TF Variable Notes

  • The workstation_ip and key_name variables are identical to those described in Exercise 1's TF Variable Notes, and can be set the same way (for example, via TF_VAR_-prefixed environment variables).

Exercise 6

Launch an EKS cluster and deploy a pre-built cloud native web app.

Stocks App

The following EKS architecture will be provisioned using Terraform:

EKS Cloud Native Application

The cloud native web app that gets deployed is based on the following codebase:

The EKS cluster is launched using public AWS Terraform modules (referenced as module.eks in the configuration), together with additional providers, including the Helm provider.

The EKS cluster will be provisioned with 2 worker nodes based on m5.large spot instances. This configuration is suitable for the demonstration purposes of this exercise; production environments are better served by always-on, on-demand instances.
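
A node group of this shape might be expressed through the public terraform-aws-modules/eks module roughly as follows. This is a trimmed, hypothetical sketch: the module source, cluster name, and attribute set are assumptions, and required inputs such as vpc_id and subnet_ids are omitted.

```hcl
# Hypothetical sketch: two m5.large spot worker nodes via the public
# EKS module. Not the repo's actual configuration.
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name = "stocks-app" # assumed name
  # vpc_id and subnet_ids omitted for brevity

  eks_managed_node_groups = {
    workers = {
      desired_size   = 2
      min_size       = 2
      max_size       = 2
      instance_types = ["m5.large"]
      capacity_type  = "SPOT" # spot instances, per the exercise notes
    }
  }
}
```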

The cloud native web app deployed is configured within the ./k8s directory, and is installed automatically using the following null resource configuration:

resource "null_resource" "deploy_app" {
  triggers = {
    always_run = "${timestamp()}"
  }

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    working_dir = path.module
    command     = <<EOT
      echo deploying app...
      ./k8s/app.install.sh
    EOT
  }

  depends_on = [
    helm_release.nginx_ingress
  ]
}

The Helm provider is used to automatically install the Nginx Ingress Controller at provisioning time:

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    }
  }
}

resource "helm_release" "nginx_ingress" {
  name = "nginx-ingress"

  repository       = "https://helm.nginx.com/stable"
  chart            = "nginx-ingress"
  namespace        = "nginx-ingress"
  create_namespace = true

  set {
    name  = "service.type"
    value = "ClusterIP"
  }

  set {
    name  = "controller.service.name"
    value = "nginx-ingress-controller"
  }
}

Exercise 7

Deploy a set of serverless apps using API Gateway and Lambda Functions.

