
PerfKit Benchmarker

PerfKit Benchmarker is an open effort to define a canonical set of benchmarks to measure and compare cloud offerings. It's designed to operate via vendor-provided command line tools. The benchmark default settings are not tuned for any particular platform or instance type; they are recommended for consistency across services. Only in the rare case where there is a common practice, such as setting the buffer pool size of a database, do we change any settings. PerfKit Benchmarker is licensed under the Apache 2 license terms. Please make sure to read, understand and agree to the terms of the LICENSE and CONTRIBUTING files before proceeding.

This README is designed to give you the information you need to get running with the benchmarker and the basics of working with the code; the wiki contains more detailed information.

Licensing

PerfKit Benchmarker provides wrappers and workload definitions around popular benchmark tools. We made it very simple to use and automate everything we can. It instantiates VMs on the Cloud provider of your choice, automatically installs benchmarks, and runs the workloads without user interaction.

Due to the level of automation, you will not see prompts for software installed as part of a benchmark run. Therefore you must accept the license of each of the benchmarks individually, and take responsibility for using them, before you use PerfKit Benchmarker.

Moving forward, you will need to run PKB with the explicit flag --accept-licenses.

In its current release you can list the benchmarks that are executed with ./pkb.py --helpmatch=benchmarks.

Some of the benchmarks invoked require Java, and you must also agree to the Java license before running them.

SPEC CPU2006 benchmark setup cannot be automated. SPEC requires that users purchase a license and agree with their terms and conditions. PerfKit Benchmarker users must manually download cpu2006-1.2.iso from the SPEC website, save it under the perfkitbenchmarker/data folder (e.g. ~/PerfKitBenchmarker/perfkitbenchmarker/data/cpu2006-1.2.iso), and also supply a runspec cfg file (e.g. ~/PerfKitBenchmarker/perfkitbenchmarker/data/linux64-x64-gcc47.cfg). Alternatively, PerfKit Benchmarker can accept a tar file that can be generated with the following steps:

  • Extract the contents of cpu2006-1.2.iso into a directory named cpu2006
  • Run cpu2006/install.sh
  • Copy the cfg file into cpu2006/config
  • Create a tar file containing the cpu2006 directory, and place it under the perfkitbenchmarker/data folder (e.g. ~/PerfKitBenchmarker/perfkitbenchmarker/data/cpu2006v1.2.tgz).

PerfKit Benchmarker will use the tar file if it is present. Otherwise, it will search for the iso and cfg files.
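
As a concrete sketch of the tar-file steps above (paths and the cfg file name are examples; adjust them to your environment):

$ sudo mount -o loop cpu2006-1.2.iso /mnt           # mount the iso
$ mkdir cpu2006 && cp -r /mnt/. cpu2006             # extract its contents into cpu2006
$ chmod -R u+w cpu2006                              # files copied from an iso are read-only
$ (cd cpu2006 && ./install.sh)                      # run the SPEC installer
$ cp linux64-x64-gcc47.cfg cpu2006/config/          # copy the cfg file into cpu2006/config
$ tar czf cpu2006v1.2.tgz cpu2006                   # create the tar file
$ mv cpu2006v1.2.tgz ~/PerfKitBenchmarker/perfkitbenchmarker/data/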

Installation and Setup

Before you can run the PerfKit Benchmarker, you need account(s) on the cloud provider(s) you want to benchmark (see providers). You also need the software dependencies, which are mostly command line tools and credentials to access your accounts without a password. The following steps should help you get up and running with PKB.

Python 3

The recommended way to install and run PKB is in a virtualenv with the latest version of Python 3 (at least Python 3.9). Most Linux distributions and recent Mac OS X versions already have Python 3 installed at /usr/bin/python3.

If Python is not installed, you can likely install it using your distribution's package manager, or see the Python Download page. Then create and activate a virtualenv:

python3 -m venv $HOME/my_virtualenv
source $HOME/my_virtualenv/bin/activate

Install PerfKit Benchmarker

Download the latest PerfKit Benchmarker release from GitHub. You can also clone the working version with:

$ cd $HOME
$ git clone https://github.com/GoogleCloudPlatform/PerfKitBenchmarker.git

Install Python library dependencies:

$ pip3 install -r $HOME/PerfKitBenchmarker/requirements.txt

You may need to install additional dependencies depending on the cloud provider you are using. For example, for AWS:

$ cd $HOME/PerfKitBenchmarker/perfkitbenchmarker/providers/aws
$ pip3 install -r requirements.txt

Preprovisioned data

Some benchmarks may require data to be preprovisioned in a cloud. To preprovision data, you will need to obtain the data and then upload it to that cloud. See more information below about which benchmarks require preprovisioned data and how to upload it to different clouds.

Running a Single Benchmark

PerfKit Benchmarker can run benchmarks on Cloud Providers (GCP, AWS, Azure, DigitalOcean, and others) as well as on any "machine" you can SSH into.

Example run on GCP

$ ./pkb.py --project=<GCP project ID> --benchmarks=iperf --machine_type=f1-micro

Example run on AWS

$ cd PerfKitBenchmarker
$ ./pkb.py --cloud=AWS --benchmarks=iperf --machine_type=t2.micro

Example run on Azure

$ ./pkb.py --cloud=Azure --machine_type=Standard_A0 --benchmarks=iperf

Example run on IBMCloud

$ ./pkb.py --cloud=IBMCloud --machine_type=cx2-4x8 --benchmarks=iperf

Example run on AliCloud

$ ./pkb.py --cloud=AliCloud --machine_type=ecs.s2.large --benchmarks=iperf

Example run on DigitalOcean

$ ./pkb.py --cloud=DigitalOcean --machine_type=16gb --benchmarks=iperf

Example run on OpenStack

$ ./pkb.py --cloud=OpenStack --machine_type=m1.medium \
           --openstack_network=private --benchmarks=iperf

Example run on Kubernetes

$ ./pkb.py --cloud=Kubernetes --benchmarks=iperf --kubectl=/path/to/kubectl \
           --kubeconfig=/path/to/kubeconfig --image=image-with-ssh-server \
           --ceph_monitors=10.20.30.40:6789,10.20.30.41:6789

Example run on Mesos

$ ./pkb.py --cloud=Mesos --benchmarks=iperf --marathon_address=localhost:8080 --image=image-with-ssh-server

Example run on CloudStack

$ ./pkb.py --cloud=CloudStack --benchmarks=ping --cs_network_offering=DefaultNetworkOffering

Example run on Rackspace

$ ./pkb.py --cloud=Rackspace --machine_type=general1-2 --benchmarks=iperf

Example run on ProfitBricks

$ ./pkb.py --cloud=ProfitBricks --machine_type=Small --benchmarks=iperf

How to Run Windows Benchmarks

Install all dependencies as above and ensure that smbclient is installed on your system if you are running on a Linux controller:

$ which smbclient
/usr/bin/smbclient

Now you can run Windows benchmarks by running with --os_type=windows. Windows has a different set of benchmarks than Linux does. They can be found under perfkitbenchmarker/windows_benchmarks/. The target VM OS is Windows Server 2012 R2.
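
For example, a sketch of a Windows run (assuming the ntttcp benchmark from perfkitbenchmarker/windows_benchmarks/ is available in your checkout):

$ ./pkb.py --cloud=GCP --os_type=windows --benchmarks=ntttcp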

How to Run Benchmarks with Juju

Juju is a service orchestration tool that enables you to quickly model, configure, deploy and manage entire cloud environments. Supported benchmarks will deploy a Juju-modeled service automatically, with no extra user configuration required, by specifying the --os_type=juju flag.

Example

$ ./pkb.py --cloud=AWS --os_type=juju --benchmarks=cassandra_stress

Benchmark support

Benchmark/Package authors need to implement the JujuInstall() method inside their package. This method deploys, configures, and relates the services to be benchmarked. Please note that other software installation and configuration should be bypassed when FLAGS.os_type == JUJU. See perfkitbenchmarker/linux_packages/cassandra.py for an example implementation.

How to Run All Standard Benchmarks

Run with --benchmarks="standard_set" and every benchmark in the standard set will run serially, which can take a couple of hours. Additionally, if you don't specify --cloud=..., all benchmarks will run on the Google Cloud Platform.
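
For example, on GCP (a sketch; supply your own project ID):

$ ./pkb.py --project=<GCP project ID> --benchmarks="standard_set"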

How to Run All Benchmarks in a Named Set

Named sets are groupings of one or more benchmarks in the benchmarking directory. This feature allows parallel innovation of what is important to measure in the Cloud, and is defined by the set owner. For example, the GoogleSet is maintained by Google, whereas the StanfordSet is managed by Stanford. Once a quarter a meeting is held to review all the sets to determine which benchmarks should be promoted to the standard_set. The Standard Set is also reviewed to see if anything should be removed. To run all benchmarks in a named set, specify the set name in the benchmarks parameter (e.g., --benchmarks="standard_set"). Sets can be combined with individual benchmarks or other named sets, as in the example below.
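
A sketch of combining a named set with an individual benchmark:

$ ./pkb.py --benchmarks="standard_set,iperf"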

Useful Global Flags

The following are some common flags used when configuring PerfKit Benchmarker.

  • --helpmatch=pkb: see all global flags.
  • --helpmatch=hpcc: see all flags associated with the hpcc benchmark. You can substitute any benchmark name to see its associated flags.
  • --benchmarks: a comma-separated list of benchmarks or benchmark sets to run, such as --benchmarks=iperf,ping. To see the full list, run ./pkb.py --helpmatch=benchmarks | grep perfkitbenchmarker.
  • --cloud: the cloud where the benchmarks are run. See the list of default zones below for choices.
  • --machine_type: type of machine to provision if pre-provisioned machines are not used. Most cloud providers accept the names of pre-defined provider-specific machine types (for example, GCP supports --machine_type=n1-standard-8 for a GCE n1-standard-8 VM). Some cloud providers support YAML expressions that match the corresponding VM spec machine_type property in the YAML configs (for example, GCP supports --machine_type="{cpus: 1, memory: 4.5GiB}" for a GCE custom VM with 1 vCPU and 4.5GiB memory). Note that the value provided by this flag affects all provisioned machines; users who wish to provision different machine types for different roles within a single benchmark run should use the YAML configs for finer control.
  • --zones: overrides the default zone. See the list below.
  • --data_disk_type: type of disk to use. Names are provider-specific; see the disk type list below.

The default cloud is 'GCP'; override it with the --cloud flag. Each cloud has a default zone, which you can override with the --zones flag. The flag supports the same values that the corresponding Cloud CLIs take:

  • GCP: us-central1-a
  • AWS: us-east-1a
  • Azure: eastus2
  • IBMCloud: us-south-1
  • AliCloud: West US
  • DigitalOcean: sfo1. You must use a zone that supports the 'metadata' (for cloud config) and 'private_networking' features.
  • OpenStack: nova
  • CloudStack: QC-1
  • Rackspace: IAD. OnMetal machine-types are available only in the IAD zone.
  • Kubernetes: k8s
  • ProfitBricks: AUTO. Additional zones: ZONE_1, ZONE_2, or ZONE_3.

Example:

$ ./pkb.py --cloud=GCP --zones=us-central1-a --benchmarks=iperf,ping

The disk type names vary by provider, but the following list summarizes some useful ones. (Many cloud providers have more disk types beyond these options.)

  • GCP: pd-ssd (network-attached SSD), pd-standard (network-attached HDD)
  • AWS: gp3 (network-attached SSD), st1 (network-attached HDD)
  • Azure: Premium_LRS (network-attached SSD), Standard_LRS (network-attached HDD)
  • Rackspace: cbs-ssd (network-attached SSD), cbs-sata (network-attached HDD)

Also note that --data_disk_type=local tells PKB not to allocate a separate disk, but to use whatever comes with the VM. This is useful with AWS instance types that come with local SSDs, or with the GCP --gce_num_local_ssds flag.

If an instance type comes with more than one disk, PKB uses whichever does not hold the root partition. Specifically, on Azure, PKB always uses /dev/sdb as its scratch device.
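
For example, a sketch that benchmarks a GCE local SSD with fio (machine type and flag values are illustrative):

$ ./pkb.py --cloud=GCP --benchmarks=fio --machine_type=n1-standard-8 \
           --gce_num_local_ssds=1 --data_disk_type=local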

Proxy Configuration for VM Guests

If the VM guests do not have direct Internet access in the cloud environment, you can configure proxy settings through pkb.py flags.

To do that, simply set three flags. The flag values use the same <protocol>://<server>:<port> syntax as the corresponding environment variables, for example --http_proxy=http://proxy.example.com:8080 .

  • --http_proxy: needed for the package manager on the guest OS and for some PerfKit packages.
  • --https_proxy: needed for the package manager on Ubuntu guests and for packages downloaded from GitHub.
  • --ftp_proxy: needed for some PerfKit packages.
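
For example, a sketch of a proxied run (the proxy address is illustrative):

$ ./pkb.py --cloud=GCP --benchmarks=iperf \
           --http_proxy=http://proxy.example.com:8080 \
           --https_proxy=http://proxy.example.com:8080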

Preprovisioned Data

As mentioned above, some benchmarks require preprovisioned data. This section describes how to preprovision this data.

Benchmarks with Preprovisioned Data

Sample Preprovision Benchmark

This benchmark demonstrates the use of preprovisioned data. Create the following file to upload using the command line:

echo "1234567890" > preprovisioned_data.txt

To upload, follow the instructions below with a filename of preprovisioned_data.txt and a benchmark name of sample.

Clouds with Preprovisioned Data

Google Cloud

To preprovision data on Google Cloud, you will need to upload each file to Google Cloud Storage using gsutil. First, you will need to create a storage bucket that is accessible from VMs created in Google Cloud by PKB. Then copy each file to this bucket using the command

gsutil cp <filename> gs://<bucket>/<benchmark-name>/<filename>

To run a benchmark on Google Cloud that uses the preprovisioned data, use the flag --gcp_preprovisioned_data_bucket=<bucket>.
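
Putting it together for the sample benchmark above (a sketch; the bucket name is illustrative, and we assume the sample benchmark is invoked as --benchmarks=sample):

$ gsutil cp preprovisioned_data.txt gs://my-pkb-bucket/sample/preprovisioned_data.txt
$ ./pkb.py --cloud=GCP --benchmarks=sample --gcp_preprovisioned_data_bucket=my-pkb-bucket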

AWS

To preprovision data on AWS, you will need to upload each file to S3 using the AWS CLI. First, you will need to create a storage bucket that is accessible from VMs created in AWS by PKB. Then copy each file to this bucket using the command

aws s3 cp <filename> s3://<bucket>/<benchmark-name>/<filename>

To run a benchmark on AWS that uses the preprovisioned data, use the flag --aws_preprovisioned_data_bucket=<bucket>.

Configurations and Configuration Overrides

Each benchmark now has an independent configuration, written in YAML, and users may override this default configuration by supplying their own. This allows for much more complex setups than previously possible, including running benchmarks across clouds.

A benchmark configuration has a fairly simple structure: it is essentially a series of nested dictionaries. At the top level, it contains VM groups, which are logical groups of homogeneous machines. The VM groups hold both a vm_spec and a disk_spec, which contain the parameters needed to create members of that group. Here is an example of an expanded configuration:

hbase_ycsb:
  vm_groups:
    loaders:
      vm_count: 4
      vm_spec:
        GCP:
          machine_type: n1-standard-1
          image: ubuntu-16-04
          zone: us-central1-c
        AWS:
          machine_type: m3.medium
          image: ami-######
          zone: us-east-1a
        # Other clouds here...
      # This specifies the cloud to use for the group. This allows for
      # benchmark configurations that span clouds.
      cloud: AWS
      # No disk_spec here since these are loaders.
    master:
      vm_count: 1
      cloud: GCP
      vm_spec:
        GCP:
          machine_type:
            cpus: 2
            memory: 10.0GiB
          image: ubuntu-16-04
          zone: us-central1-c
        # Other clouds here...
      disk_count: 1
      disk_spec:
        GCP:
          disk_size: 100
          disk_type: standard
          mount_point: /scratch
        # Other clouds here...
    workers:
      vm_count: 4
      cloud: GCP
      vm_spec:
        GCP:
          machine_type: n1-standard-4
          image: ubuntu-16-04
          zone: us-central1-c
        # Other clouds here...
      disk_count: 1
      disk_spec:
        GCP:
          disk_size: 500
          disk_type: remote_ssd
          mount_point: /scratch
        # Other clouds here...

For a complete list of keys for vm_specs and disk_specs see virtual_machine.BaseVmSpec and disk.BaseDiskSpec and their derived classes.

User configs are applied on top of the existing default config and can be specified in two ways. The first is by supplying a config file via the --benchmark_config_file flag. The second is by specifying a single setting to override via the --config_override flag.

A user config file only needs to specify the settings which it is intended to override. For example if the only thing you want to do is change the number of VMs for the cluster_boot benchmark, this config is sufficient:

cluster_boot:
  vm_groups:
    default:
      vm_count: 100

You can achieve the same effect by specifying the --config_override flag. The value of the flag should be a path within the YAML (with keys delimited by periods), an equals sign, and finally the new value:

--config_override=cluster_boot.vm_groups.default.vm_count=100
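
For example, to boot 100 VMs in the cluster_boot benchmark without a config file (a sketch):

$ ./pkb.py --benchmarks=cluster_boot --config_override=cluster_boot.vm_groups.default.vm_count=100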

See the section below for how to use static (i.e. pre-provisioned) machines in your config.

Advanced: How To Run Benchmarks Without Cloud Provisioning (e.g., local workstation)

It is possible to run PerfKit Benchmarker without running the Cloud provisioning steps. This is useful if you want to run on a local machine, or have a benchmark like iperf run from an external point to a Cloud VM.

In order to do this you need to make sure:

  • The static (i.e. not provisioned by PerfKit Benchmarker) machine is ssh'able
  • The user PerfKitBenchmarker will login as has 'sudo' access. (*** Note we hope to remove this restriction soon ***)

Next, you will want to create a YAML user config file describing how to connect to the machine as follows:

static_vms:
  - &vm1 # Using the & character creates an anchor that we can
         # reference later by using the same name and a * character.
    ip_address: 170.200.60.23
    user_name: voellm
    ssh_private_key: /home/voellm/perfkitkeys/my_key_file.pem
    zone: Siberia
    disk_specs:
      - mount_point: /data_dir
  • The ip_address is the address where you want benchmarks to run.
  • ssh_private_key is where to find the private ssh key.
  • zone can be anything you want. It is used when publishing results.
  • disk_specs is used by all benchmarks which use disk (e.g., fio, bonnie++, and many others).

In the same file, configure any number of benchmarks (in this case just iperf), and reference the static VM as follows:

iperf:
  vm_groups:
    vm_1:
      static_vms:
        - *vm1

I called my file iperf.yaml and used it to run iperf from Siberia to a GCP VM in us-central1-f as follows:

$ ./pkb.py --benchmarks=iperf --machine_type=f1-micro --benchmark_config_file=iperf.yaml --zones=us-central1-f --ip_addresses=EXTERNAL
  • --ip_addresses=EXTERNAL tells PerfKit Benchmarker not to use the 10.X (i.e. internal) addresses that all Cloud VMs have, and to use the external IP address instead.

If a benchmark requires two machines like iperf, you can have two machines in the same YAML file as shown below. This means you can indeed run between two machines and never provision any VMs in the Cloud.

static_vms:
  - &vm1
    ip_address: <ip1>
    user_name: connormccoy
    ssh_private_key: /home/connormccoy/.ssh/google_compute_engine
    internal_ip: 10.240.223.37
    install_packages: false
  - &vm2
    ip_address: <ip2>
    user_name: connormccoy
    ssh_private_key: /home/connormccoy/.ssh/google_compute_engine
    internal_ip: 10.240.234.189
    ssh_port: 2222

iperf:
  vm_groups:
    vm_1:
      static_vms:
        - *vm2
    vm_2:
      static_vms:
        - *vm1

Specifying Flags in Configuration Files

You can now specify flags in configuration files by using the flags key at the top level in a benchmark config. The expected value is a dictionary mapping flag names to their new default values. The flags are only defaults; it's still possible to override them via the command line. It's even possible to specify conflicting values of the same flag in different benchmarks:

iperf:
  flags:
    machine_type: n1-standard-2
    zone: us-central1-b
    iperf_sending_thread_count: 2

netperf:
  flags:
    machine_type: n1-standard-8

The new defaults will only apply to the benchmark in which they are specified.
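
A sketch of running with such a config (the file name my_flags.yaml is illustrative):

$ ./pkb.py --benchmarks=iperf,netperf --benchmark_config_file=my_flags.yaml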

Using Elasticsearch Publisher

PerfKit data can optionally be published to an Elasticsearch server. To enable this, the elasticsearch Python package must be installed.

$ pip install elasticsearch

Note: The elasticsearch Python library and Elasticsearch must have matching major versions.

The following are flags used by the Elasticsearch publisher. At minimum, all that is needed is the --es_uri flag.

  • --es_uri: the Elasticsearch server address and port (e.g. localhost:9200).
  • --es_index: the Elasticsearch index name to store documents (default: perfkit).
  • --es_type: the Elasticsearch document type (default: result).
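
For example, a minimal sketch that publishes an iperf run to a local Elasticsearch server:

$ ./pkb.py --benchmarks=iperf --es_uri=localhost:9200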

Note: the Amazon Elasticsearch Service currently does not support transport on port 9200, so you must use the endpoint with port 80, e.g. search-<ID>.es.amazonaws.com:80, and allow your IP address in the cluster.

Using InfluxDB Publisher

No additional packages need to be installed in order to publish Perfkit data to an InfluxDB server.

The InfluxDB publisher takes flags for the InfluxDB URI and the database name. If these are not set, the publisher falls back to the defaults listed below; however, you must pass at least the --influx_uri flag for data to be published to InfluxDB.

  • --influx_uri: the InfluxDB address and port, in the format hostname:port (default: localhost:8086).
  • --influx_db_name: the name of the InfluxDB database to publish to or create (default: perfkit).
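
For example, a minimal sketch that publishes an iperf run to a local InfluxDB server using the defaults:

$ ./pkb.py --benchmarks=iperf --influx_uri=localhost:8086 --influx_db_name=perfkit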

How to Extend PerfKit Benchmarker

First start with the CONTRIBUTING.md file. It has the basics on how to work with PerfKitBenchmarker, and how to submit your pull requests.

In addition to the CONTRIBUTING.md file we have added a lot of comments into the code to make it easy to:

  • Add new benchmarks (e.g.: --benchmarks=<new benchmark>)
  • Add new package/os type support (e.g.: --os_type=<new os type>)
  • Add new providers (e.g.: --cloud=<new provider>)
  • etc.

Even with lots of comments, we keep more detailed documentation on the wiki. Missing documentation you want? Start a page and/or open an issue to get it added.

Integration Testing

If you wish to run unit or integration tests, ensure that you have tox >= 2.0.0 installed.

In addition to regular unit tests, which are run via hooks/check-everything, PerfKit Benchmarker has integration tests, which create actual cloud resources and take time and money to run. For this reason, they will only run when the variable PERFKIT_INTEGRATION is defined in the environment. The command

$ tox -e integration

will run the integration tests. The integration tests depend on having installed and configured all of the relevant cloud provider SDKs, and will fail if you have not done so.
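
For example (a sketch, defining the variable inline for a single invocation):

$ PERFKIT_INTEGRATION=1 tox -e integration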

Planned Improvements

Many... please add new requests via GitHub issues.
