
Get up and running with Jenkins on Google Kubernetes Engine

Lab: Build a Continuous Deployment Pipeline with Jenkins and Kubernetes

For a more in-depth best practices guide, go to the solution posted here.

Introduction

This guide will take you through the steps necessary to continuously deliver your software to end users by leveraging Google Kubernetes Engine and Jenkins to orchestrate the software delivery pipeline. If you are not familiar with basic Kubernetes concepts, have a look at Kubernetes 101.

In order to accomplish this goal you will use the following Jenkins plugins:

  • Jenkins Kubernetes Plugin - starts Jenkins build executor containers in the Kubernetes cluster when builds are requested and terminates those containers when builds complete, freeing resources up for the rest of the cluster
  • Jenkins Pipelines - define our build pipeline declaratively and keep it checked into source code management alongside our application code
  • Google OAuth Plugin - allows you to add your Google OAuth credentials to Jenkins

In order to deploy the application with Kubernetes you will use the following resources:

  • Deployments - replicate our application across our Kubernetes nodes and allow us to do a controlled rolling update of our software across the fleet of application instances
  • Services - load balancing and service discovery for our internal services
  • Ingress - external load balancing and SSL termination for our external service
  • Secrets - secure storage of non-public configuration information, SSL certs specifically in our case

Prerequisites

  1. A Google Cloud Platform Account
  2. Enable the Compute Engine, Kubernetes Engine, and Cloud Build APIs

Do this first

In this section you will start your Google Cloud Shell and clone the lab code repository to it.

  1. Create a new Google Cloud Platform project: https://console.developers.google.com/project

  2. Click the Activate Cloud Shell icon in the top-right and wait for your shell to open.

    If you are prompted with a Learn more message, click Continue to finish opening the Cloud Shell.

  3. When the shell is open, use the gcloud command line interface tool to set your default compute zone:

    gcloud config set compute/zone us-east1-d

    Output (do not copy):

    Updated property [compute/zone].
    
  4. Set an environment variable with your project:

    export GOOGLE_CLOUD_PROJECT=$(gcloud config get-value project)

    Output (do not copy):

    Your active configuration is: [cloudshell-...]
    
  5. Clone the lab repository in your Cloud Shell, then change into that directory:

    git clone https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes.git

    Output (do not copy):

    Cloning into 'continuous-deployment-on-kubernetes'...
    ...
    
    cd continuous-deployment-on-kubernetes

Create a Service Account with permissions

  1. Create a service account on Google Cloud Platform (GCP).

    Creating a dedicated service account is the recommended way to avoid granting Jenkins and the cluster more permissions than they need.

    gcloud iam service-accounts create jenkins-sa \
        --display-name "jenkins-sa"

    Output (do not copy):

    Created service account [jenkins-sa].
    
  2. Add the required permissions to the service account, using predefined roles.

    Most of these permissions are related to Jenkins' use of Cloud Build and to storing and retrieving build artifacts in Cloud Storage. The service account also needs to allow the Jenkins agent to read from a repo you will create in Cloud Source Repositories (CSR).

    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/viewer"
    
    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/source.reader"
    
    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/storage.admin"
    
    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/storage.objectAdmin"
    
    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/cloudbuild.builds.editor"
    
    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/container.developer"

    You can check the permissions added using IAM & admin in Cloud Console.
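
    If you prefer to verify from the command line, a standard gcloud IAM policy query should list the roles bound to the new service account (adjust the filter if you used a different account name):

    gcloud projects get-iam-policy $GOOGLE_CLOUD_PROJECT \
        --flatten="bindings[].members" \
        --filter="bindings.members:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --format="table(bindings.role)"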

  3. Export the service account credentials to a JSON key file in Cloud Shell:

    gcloud iam service-accounts keys create ~/jenkins-sa-key.json \
        --iam-account "jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com"

    Output (do not copy):

    created key [...] of type [json] as [/home/.../jenkins-sa-key.json] for [jenkins-sa@...iam.gserviceaccount.com]
    
  4. Download the JSON key file to your local machine.

    Click Download File from the More menu on the Cloud Shell toolbar.

  5. Enter the File path as jenkins-sa-key.json and click Download.

    The file will be downloaded to your local machine, for use later.
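
    If the download dialog gives you trouble, Cloud Shell also ships a small helper CLI that should trigger the same browser download (assuming the cloudshell command is available in your session):

    cloudshell download ~/jenkins-sa-key.json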

Create a Kubernetes Cluster

  1. Provision the cluster with gcloud:

    Use Google Kubernetes Engine (GKE) to create and manage your Kubernetes cluster, named jenkins-cd. Use the service account created earlier.

    gcloud container clusters create jenkins-cd \
      --num-nodes 2 \
      --machine-type n1-standard-2 \
      --cluster-version 1.15 \
      --service-account "jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com"

    Output (do not copy):

    NAME        LOCATION    MASTER_VERSION  MASTER_IP     MACHINE_TYPE  NODE_VERSION   NUM_NODES  STATUS
    jenkins-cd  us-east1-d  1.15.11-gke.15   35.229.29.69  n1-standard-2 1.15.11-gke.15  2          RUNNING
    
  2. Once that operation completes, retrieve the credentials for your cluster.

    gcloud container clusters get-credentials jenkins-cd

    Output (do not copy):

    Fetching cluster endpoint and auth data.
    kubeconfig entry generated for jenkins-cd.
    
  3. Confirm that the cluster is running and kubectl is working by listing pods:

    kubectl get pods

    Output (do not copy):

    No resources found.
    

    You would see an error if the cluster was not created, or you did not have permissions.
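
    If you want an extra sanity check from the command line, these standard commands should confirm the cluster exists and its nodes are Ready:

    gcloud container clusters list
    kubectl get nodes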

  4. Add yourself as a cluster administrator in the cluster's RBAC so that you can give Jenkins permissions in the cluster:

    kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value account)

    Output (do not copy):

    Your active configuration is: [cloudshell-...]
    clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created
    

Install Helm

In this lab, you will use Helm to install Jenkins from a stable chart. Helm is a package manager that makes it easy to configure and deploy Kubernetes applications. Once you have Jenkins installed, you'll be able to set up your CI/CD pipeline.

  1. Download and install the helm binary

    wget https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz
  2. Unpack the archive and copy the helm binary into your working directory:

    tar zxfv helm-v3.2.1-linux-amd64.tar.gz
    cp linux-amd64/helm .
  3. Add the official stable repository.

    ./helm repo add stable https://kubernetes-charts.storage.googleapis.com
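
    Optionally, confirm the repo was added and that the Jenkins chart used later in this lab is visible (a quick sanity check, nothing more):

    ./helm repo list
    ./helm search repo stable/jenkins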
  4. Ensure Helm is properly installed by running the following command. You should see version v3.2.1 appear:

    ./helm version

    Output (do not copy):

    version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
    

Configure and Install Jenkins

You will use a custom values file to add the GCP-specific plugin necessary to use service account credentials to reach your Cloud Source Repository.

  1. Use the Helm CLI to deploy the chart with your configuration set.

    ./helm install cd-jenkins -f jenkins/values.yaml stable/jenkins --version 1.7.3 --wait

    Output (do not copy):

    ...
    For more information on running Jenkins on Kubernetes, visit:
    https://cloud.google.com/solutions/jenkins-on-container-engine
    
  2. The Jenkins pod STATUS should change to Running when it's ready:

    kubectl get pods

    Output (do not copy):

    NAME                          READY     STATUS    RESTARTS   AGE
    cd-jenkins-7c786475dd-vbhg4   1/1       Running   0          1m
    
  3. Configure the Jenkins service account to be able to deploy to the cluster.

    kubectl create clusterrolebinding jenkins-deploy --clusterrole=cluster-admin --serviceaccount=default:cd-jenkins

    Output (do not copy):

    clusterrolebinding.rbac.authorization.k8s.io/jenkins-deploy created
    
  4. Set up port forwarding to the Jenkins UI, from Cloud Shell:

    export JENKINS_POD_NAME=$(kubectl get pods -l "app.kubernetes.io/component=jenkins-master" -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward $JENKINS_POD_NAME 8080:8080 >> /dev/null &
  5. Now, check that the Jenkins Service was created properly:

    kubectl get svc

    Output (do not copy):

    NAME               CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
    cd-jenkins         10.35.249.67   <none>        8080/TCP    3h
    cd-jenkins-agent   10.35.248.1    <none>        50000/TCP   3h
    kubernetes         10.35.240.1    <none>        443/TCP     9h
    

    This Jenkins configuration is using the Kubernetes Plugin, so that builder nodes will be automatically launched as necessary when the Jenkins master requests them. Upon completion of the work, the builder nodes will be automatically terminated and their resources added back to the cluster's resource pool.

    Notice that this service exposes ports 8080 and 50000 for any pods that match the selector. This will expose the Jenkins web UI and builder/agent registration ports within the Kubernetes cluster. Additionally, the Jenkins UI service is exposed using a ClusterIP so that it is not accessible from outside the cluster.
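
    To confirm that from the command line, the Service type should come back as ClusterIP (service name per the listing above):

    kubectl get svc cd-jenkins -o jsonpath='{.spec.type}'; echo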

Connect to Jenkins

  1. The Jenkins chart will automatically create an admin password for you. To retrieve it, run:

    printf $(kubectl get secret cd-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
  2. To get to the Jenkins user interface, click the Web Preview button in Cloud Shell, then click Preview on port 8080:

You should now be able to log in with username admin and your auto-generated password.

Your progress, and what's next

You've got a Kubernetes cluster managed by GKE. You've deployed:

  • a Jenkins Deployment
  • a (non-public) service that exposes Jenkins to its agent containers

You have the tools to build a continuous deployment pipeline. Now you need a sample app to deploy continuously.

The sample app

You'll use a very simple sample application - gceme - as the basis for your CD pipeline. gceme is written in Go and is located in the sample-app directory in this repo. When you run the gceme binary on a GCE instance, it displays the instance's metadata in a pretty card.

The binary supports two modes of operation, designed to mimic a microservice. In backend mode, gceme will listen on a port (8080 by default) and return GCE instance metadata as JSON, with content-type=application/json. In frontend mode, gceme will query a backend gceme service and render that JSON in the card UI described above. It looks roughly like this:

-----------      ------------      ~~~~~~~~~~~~        -----------
|         |      |          |      |          |        |         |
|  user   | ---> |   gceme  | ---> | lb/proxy | -----> |  gceme  |
|(browser)|      |(frontend)|      |(optional)|   |    |(backend)|
|         |      |          |      |          |   |    |         |
-----------      ------------      ~~~~~~~~~~~~   |    -----------
                                                  |    -----------
                                                  |    |         |
                                                  |--> |  gceme  |
                                                       |(backend)|
                                                       |         |
                                                       -----------

Both the frontend and backend modes of the application support two additional URLs (see the curl example after this list):

  1. /version prints the version of the binary (declared as a const in main.go)
  2. /healthz reports the health of the application. In frontend mode, health will be OK if the backend is reachable.
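
Once the frontend Service has an external IP (you will export it as FRONTEND_SERVICE_IP in the deployment steps below), you can exercise both endpoints with curl; this is just an illustration of the URLs described above:

    # Assumes FRONTEND_SERVICE_IP has been exported as shown later in this lab
    curl http://$FRONTEND_SERVICE_IP/version   # prints the binary's version, e.g. 1.0.0
    curl http://$FRONTEND_SERVICE_IP/healthz   # reports health; healthy only if the backend is reachable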

Deploy the sample app to Kubernetes

In this section you will deploy the gceme frontend and backend to Kubernetes using Kubernetes manifest files (included in this repo) that describe the environment that the gceme binary/Docker image will be deployed to. They use a default gceme Docker image that you will be updating with your own in a later section.

You'll have two primary environments - canary and production - and use Kubernetes to manage them.

Note: The manifest files for this section of the tutorial are in sample-app/k8s. You are encouraged to open and read each one before creating it per the instructions.
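
If you'd like a quick look at what you're about to apply, listing the manifest directories from the repository root is a good orientation (paths per the note above):

    ls sample-app/k8s/production sample-app/k8s/canary sample-app/k8s/services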

  1. First change directories to the sample-app, back in Cloud Shell:

    cd sample-app
  2. Create the namespace for production:

    kubectl create ns production

    Output (do not copy):

    namespace/production created
    
  3. Create the production Deployments for frontend and backend:

    kubectl --namespace=production apply -f k8s/production

    Output (do not copy):

    deployment.extensions/gceme-backend-production created
    deployment.extensions/gceme-frontend-production created
    
  4. Create the canary Deployments for frontend and backend:

    kubectl --namespace=production apply -f k8s/canary

    Output (do not copy):

    deployment.extensions/gceme-backend-canary created
    deployment.extensions/gceme-frontend-canary created
    
  5. Create the Services for frontend and backend:

    kubectl --namespace=production apply -f k8s/services

    Output (do not copy):

    service/gceme-backend created
    service/gceme-frontend created
    
  6. Scale the production frontend Deployment:

    kubectl --namespace=production scale deployment gceme-frontend-production --replicas=4

    Output (do not copy):

    deployment.extensions/gceme-frontend-production scaled
    
  7. Retrieve the External IP for the production services:

    This field may take a few minutes to appear as the load balancer is being provisioned

    kubectl --namespace=production get service gceme-frontend

    Output (do not copy):

    NAME             TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
    gceme-frontend   LoadBalancer   10.35.254.91   35.196.48.78   80:31088/TCP   1m
    
  8. Confirm that both services are working by opening the frontend EXTERNAL-IP in your browser

  9. Poll the production endpoint's /version URL.

    Open a new Cloud Shell terminal by clicking the + button to the right of the current terminal's tab.

    export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}"  --namespace=production services gceme-frontend)
    while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 3;  done

    Output (do not copy):

    1.0.0
    1.0.0
    1.0.0
    

    You should see that all requests are serviced by v1.0.0 of the application.

    Leave this running in the second terminal so you can easily observe rolling updates in the next section.

  10. Return to the first terminal/tab in Cloud Shell.

Create a repository for the sample app source

Here you'll create your own copy of the gceme sample app in Cloud Source Repositories.

  1. Initialize the git repository.

    Make sure to work from the sample-app directory of the repo you cloned previously.

    git init
    git config credential.helper gcloud.sh
    gcloud source repos create gceme
  2. Add a git remote for the new repo in Cloud Source Repositories.

    git remote add origin https://source.developers.google.com/p/$GOOGLE_CLOUD_PROJECT/r/gceme
  3. Ensure git is able to identify you:

    git config --global user.email "YOUR-EMAIL-ADDRESS"
    git config --global user.name "YOUR-NAME"
  4. Add, commit, and push all the files:

    git add .
    git commit -m "Initial commit"
    git push origin master

    Output (do not copy):

    To https://source.developers.google.com/p/myproject/r/gceme
     * [new branch]      master -> master
    

Create a pipeline

You'll now use Jenkins to define and run a pipeline that will test, build, and deploy your copy of gceme to your Kubernetes cluster. You'll approach this in phases. Let's get started with the first.

Phase 1: Add your service account credentials

First, you will need to configure GCP credentials in order for Jenkins to be able to access the code repository:

  1. In the Jenkins UI, click Credentials on the left

  2. Click the (global) link

  3. Click Add Credentials on the left

  4. From the Kind dropdown, select Google Service Account from private key

  5. Enter your project name in the Project Name field

  6. Leave JSON key selected, and click Choose File.

  7. Select the jenkins-sa-key.json file downloaded earlier, then click Open.

  8. Click OK

You should now see 1 global credential. Make a note of the name of the credential, as you will reference this in Phase 2.

Phase 2: Create a job

This lab uses Jenkins Pipeline to define builds as Groovy scripts.

Navigate to your Jenkins UI (the Web Preview on port 8080 that you opened earlier) and follow these steps to configure a Pipeline job:

  1. Click the Jenkins link in the top-left toolbar of the UI

  2. Click the New Item link in the left nav

  3. For item name use sample-app, choose the Multibranch Pipeline option, then click OK

  4. Click Add source and choose git

  5. Paste the HTTPS clone URL of your gceme repo on Cloud Source Repositories into the Project Repository field. It will look like: https://source.developers.google.com/p/[REPLACE_WITH_YOUR_PROJECT_ID]/r/gceme

  6. From the Credentials dropdown, select the name of the credential from Phase 1. It should have the format PROJECT_ID service account.

  7. Under the Scan Multibranch Pipeline Triggers section, check the Periodically if not otherwise run box, then set the Interval value to 1 minute.

  8. Click Save, leaving all other options with default values.

    A Branch indexing job was kicked off to identify any branches in your repository.

  9. Click Jenkins > sample-app, in the top menu.

    You should see the master branch now has a job created for it.

    The first run of the job will fail until the project name is set properly in the Jenkinsfile in the next step.

Phase 3: Modify Jenkinsfile, then build and test the app

  1. Create a branch for the canary environment called canary

    git checkout -b canary

    Output (do not copy):

    Switched to a new branch 'canary'
    

    The Jenkinsfile is written using the Jenkins Workflow DSL, which is Groovy-based. It allows an entire build pipeline to be expressed in a single script that lives alongside your source code and supports powerful features like parallelization, stages, and user input.

  2. Update your Jenkinsfile script with the correct PROJECT environment value.

    Be sure to replace REPLACE_WITH_YOUR_PROJECT_ID with your project name.

    Save your changes, but don't commit the new Jenkinsfile change just yet. You'll make one more change in the next section, then commit and push them together.
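
    If you'd rather make that substitution from the shell, a one-line sed should do it (assuming the placeholder string in the Jenkinsfile is exactly REPLACE_WITH_YOUR_PROJECT_ID, as above):

    # Replace the placeholder with your project ID, then eyeball the result
    sed -i "s/REPLACE_WITH_YOUR_PROJECT_ID/${GOOGLE_CLOUD_PROJECT}/g" Jenkinsfile
    grep PROJECT Jenkinsfile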

Phase 4: Deploy a canary release to canary

Now that your pipeline is working, it's time to make a change to the gceme app and let your pipeline test, package, and deploy it.

The canary environment is rolled out as a percentage of the pods behind the production load balancer. In this case we have 1 out of 5 of our frontends running the canary code and the other 4 running the production code. This allows you to ensure that the canary code is not negatively affecting users before rolling out to your full fleet. You can use the labels env: production and env: canary in Google Cloud Monitoring in order to monitor the performance of each version individually.

  1. In the sample-app repository on your workstation open html.go and replace the word blue with orange (there should be exactly two occurrences; a shell alternative follows the snippet):

    //snip
    <div class="card orange">
    <div class="card-content white-text">
    <div class="card-title">Backend that serviced this request</div>
    //snip
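
    If you prefer to make this edit from the shell, a simple substitution should work (assuming blue appears only in those two card class attributes):

    # Swap the card color in html.go, then verify both occurrences changed
    sed -i 's/blue/orange/g' html.go
    grep orange html.go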
  2. In the same repository, open main.go and change the version number from 1.0.0 to 2.0.0:

    //snip
    const version string = "2.0.0"
    //snip
  3. Push the version 2 changes to the repo:

    git add Jenkinsfile html.go main.go
    git commit -m "Version 2"
    git push origin canary
  4. Revisit your sample-app in the Jenkins UI.

    Navigate back to your Jenkins sample-app job. Notice a canary pipeline job has been created.

  5. Follow the canary build output.

    • Click the Canary link.
    • Click the #1 link in the Build History box, on the lower left.
    • Click Console Output from the left-side menu.
    • Scroll down to follow.
  6. Track the output for a few minutes.

    When you see Finished: SUCCESS, open the Cloud Shell terminal that you left polling the frontend's /version URL. Observe that some requests are now handled by the canary 2.0.0 version.

    1.0.0
    1.0.0
    1.0.0
    1.0.0
    2.0.0
    2.0.0
    1.0.0
    1.0.0
    1.0.0
    1.0.0
    

    You have now rolled out that change, version 2.0.0, to a subset of users.
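
    To see the canary/production split behind the single frontend Service, you can list the frontend pods with their env label (the selector mirrors the one used for the dev frontend later in this lab; adjust if your manifests use different labels):

    kubectl --namespace=production get pods -l app=gceme,role=frontend -L env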

  7. Continue the rollout to the rest of your users.

    Back in the other Cloud Shell terminal, merge the canary branch into master, then push it to the Git server.

     git checkout master
     git merge canary
     git push origin master
  8. Watch the pipelines in the Jenkins UI handle the change.

    Within a minute or so, you should see a new job in the Build Queue and Build Executor.

  9. Clicking on the master link will show you the stages of your pipeline as well as pass/fail and timing characteristics.

    You can see the failed master job #1, and the successful master job #2.

  10. Check the Cloud Shell terminal responses again.

    In Cloud Shell, open the terminal polling the frontend's /version URL and observe that the new version, 2.0.0, has been rolled out and is serving all requests.

    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    

If you want to understand the pipeline stages in greater detail, you can look through the Jenkinsfile in the sample-app project directory.

Phase 5: Deploy a development branch

Oftentimes changes will not be so trivial that they can be pushed directly to the canary environment. To create a development environment from a long-lived feature branch, all you need to do is push it up to the Git server, and Jenkins will automatically deploy your development environment.

In this case you will not use a load balancer, so you'll have to access your application using kubectl proxy. This proxy authenticates itself with the Kubernetes API and proxies requests from your local machine to the service in the cluster without exposing your service to the internet.

Deploy the development branch

  1. Create another branch and push it up to the Git server

    git checkout -b new-feature
    git push origin new-feature
  2. Open Jenkins in your web browser and navigate back to sample-app.

    You should see that a new job called new-feature has been created, and this job is creating your new environment.

  3. Navigate to the console output of the first build of this new job by:

    • Click the new-feature link in the job list.
    • Click the #1 link in the Build History list on the left of the page.
    • Finally click the Console Output link in the left menu.
  4. Scroll to the bottom of the console output of the job to see instructions for accessing your environment:

    Successfully verified extensions/v1beta1/Deployment: gceme-frontend-dev
    AvailableReplicas = 1, MinimumReplicas = 1
    
    [Pipeline] echo
    To access your environment run `kubectl proxy`
    [Pipeline] echo
    Then access your service via
    http://localhost:8001/api/v1/proxy/namespaces/new-feature/services/gceme-frontend:80/
    [Pipeline] }
    

Access the development branch

  1. Set up port forwarding to the dev frontend, from Cloud Shell:

    export DEV_POD_NAME=$(kubectl get pods -n new-feature -l "app=gceme,env=dev,role=frontend" -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward -n new-feature $DEV_POD_NAME 8001:80 >> /dev/null &
  2. Access your application via localhost:

    curl http://localhost:8001/api/v1/proxy/namespaces/new-feature/services/gceme-frontend:80/

    Output (do not copy):

    <!doctype html>
    <html>
    ...
    </div>
    <div class="col s2">&nbsp;</div>
    </div>
    </div>
    </html>
    

    Look through the response output for "card orange" that was changed earlier.

  3. You can now push code changes to the new-feature branch in order to update your development environment.

  4. Once you are done, merge your new-feature branch back into the canary branch to deploy that code to the canary environment:

    git checkout canary
    git merge new-feature
    git push origin canary
  5. When you are confident that your code won't wreak havoc in production, merge from the canary branch to the master branch. Your code will be automatically rolled out in the production environment:

    git checkout master
    git merge canary
    git push origin master
  6. When you are done with your development branch, delete it from Cloud Source Repositories, then delete the environment in Kubernetes:

    git push origin :new-feature
    kubectl delete ns new-feature

Extra credit: deploy a breaking change, then roll back

Make a breaking change to the gceme source, push it, and deploy it through the pipeline to production. Then pretend latency spiked after the deployment and you want to roll back. Do it! Faster!

Things to consider:

  • What is the Docker image you want to deploy for roll back?
  • How can you interact directly with Kubernetes to trigger the deployment? (One option is sketched after this list.)
  • Is SRE really what you want to do with your life?
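
One hedged starting point for the rollback question: Kubernetes keeps a rollout history per Deployment, so something like the following (deployment name per the production manifests used earlier; adjust as needed) would revert the frontend to its previous image:

    # Inspect past rollouts, then revert the production frontend to the previous revision
    kubectl --namespace=production rollout history deployment/gceme-frontend-production
    kubectl --namespace=production rollout undo deployment/gceme-frontend-production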

Clean up

Clean up is really easy, but also super important: if you don't follow these instructions, you will continue to be billed for the GKE cluster you created.

To clean up, navigate to the Google Developers Console Project List, choose the project you created for this lab, and delete it. That's it.
