Practical Machine Learning for Computer Vision

Open-sourced code from the O'Reilly book Practical Machine Learning for Computer Vision by Valliappa Lakshmanan, Martin Görner, and Ryan Gillard.

** This is not an official Google product **

Color images

Unfortunately, the print version of the book is not in color. For your convenience, all the images from the book can be found in the images folder of this repository.

Quick tour through the book

For a full tour of the book, see Full Tour (below).

Machine learning on images is revolutionizing healthcare, manufacturing, retail, and many other sectors. Many previously difficult problems can now be solved by training machine learning models to identify objects in images. Our aim in the book Practical Machine Learning for Computer Vision was to provide intuitive explanations of the ML architectures that underpin this fast-advancing field, and to provide practical code to employ these ML models to solve practical problems involving classification, measurement, detection, segmentation, representation, generation, counting, and more.

Image classification is the “hello world” of deep learning. Therefore, this codelab also provides a practical end-to-end introduction to deep learning. It can serve as a stepping stone to other deep learning domains such as natural language processing. For more details, of course, we encourage you to read the book.

What you’ll build

In this quick tour, you'll use code from the book's GitHub repository to build an end-to-end machine learning model for image understanding on Google Cloud Vertex AI. We will show you how to:

  • Start a Vertex AI Notebook
  • Prepare the 5-flowers dataset
  • Train a Transfer Learning EfficientNet model to classify flowers
  • Deploy the model
  • Explain its predictions
  • Invoke the model from a streaming pipeline

We recommend creating a brand-new GCP project to try these out. Then, delete the project when you are done to make sure that all resources have been cleaned up.
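
For example, with the gcloud CLI, gcloud projects create mlvision-book-demo sets up a throwaway project and gcloud projects delete mlvision-book-demo removes it (and every resource in it) afterwards; the project ID here is just a placeholder.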

1. Set up a Vertex AI Workbench instance

Ensure that you have GPU quota

Visit the GCP console at https://console.cloud.google.com/ and navigate to IAM & Admin | Quotas. You can also navigate to it directly by visiting https://console.cloud.google.com/iam-admin/quotas

In the Filter, start typing Nvidia and choose NVIDIA T4 GPUs. Make sure you have a region with a limit greater than zero. If not, please request a quota increase.

Note: If you want, you can do this lab with only a CPU and not a GPU. Training will take longer. Just choose the non-GPU option in the next step.

Navigate to the notebook creation part of the GCP console

Visit the GCP console at https://console.cloud.google.com/ and navigate to Vertex AI | Workbench. You can also navigate to it directly by visiting https://console.cloud.google.com/vertex-ai/workbench

Click on +New Instance at the top of the page. Then, select TensorFlow Enterprise 2.6 with Tesla T4.

Create a Notebook instance

Name the instance mlvision-book-gpu

Check the box to install the Nvidia driver automatically. If you missed it, delete the instance and start again.

Click Create to accept the other defaults.

This step will take about 10 minutes.

Clone the book’s code repository

Click on the link to Open JupyterLab

In JupyterLab, click on the git clone button (the right-most button at the top of the left panel). In the textbox, type in: https://github.com/GoogleCloudPlatform/practical-ml-vision-book

Note: An alternative way to clone the repository is to launch a Terminal and then type:

    git clone https://github.com/GoogleCloudPlatform/practical-ml-vision-book

You may encounter a GPU out-of-memory error if you execute multiple notebooks in Vertex AI Workbench. To avoid it, select "Shut Down All Kernels..." from the Kernel menu before opening a new notebook.

2. Train a Transfer Learning model

This notebook contains the core machine learning to do image classification. We will improve on this end-to-end workflow in later steps.
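
As a preview of what the notebook does, here is a minimal transfer-learning sketch in the spirit of the book's code; the 224x224 input size and the single dense head are assumptions, and the notebook's actual preprocessing and hyperparameters differ:

    import tensorflow as tf

    # Pretrained EfficientNet backbone without its classification head.
    base = tf.keras.applications.EfficientNetB0(
        input_shape=(224, 224, 3), include_top=False, weights='imagenet')
    base.trainable = False  # transfer learning: keep pretrained weights frozen

    # New classification head for the 5 flower classes.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(5, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])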

Open notebook

Navigate to practical-ml-vision-book/03_image_models/03a_transfer_learning.ipynb

Clear cells

Clear cells by selecting Edit | Clear All Outputs

Run cells

Run the cells one by one: read each cell, then execute it by pressing Shift + Enter.

3. Prepare ML datasets [Optional]

In this step, you will create training, validation, and test datasets that consist of data that has been prepared to make ML more efficient. The data will be written out as TensorFlow Records.
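
For intuition, here is a minimal sketch of what writing one such record looks like; the feature names 'image' and 'label' and the file names are illustrative, and the notebook's actual schema may differ:

    import tensorflow as tf

    def to_example(jpeg_bytes, label):
        # Pack one image/label pair into a tf.train.Example proto.
        return tf.train.Example(features=tf.train.Features(feature={
            'image': tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[jpeg_bytes])),
            'label': tf.train.Feature(
                int64_list=tf.train.Int64List(value=[label])),
        }))

    with tf.io.TFRecordWriter('train-00000.tfrec') as writer:
        with open('daisy.jpg', 'rb') as f:  # hypothetical local image
            writer.write(to_example(f.read(), 0).SerializeToString())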

Open notebook

Navigate to practical-ml-vision-book/05_create_dataset/05_split_tfrecord.ipynb

Create a Cloud Storage bucket

In a separate browser window, navigate to the Storage section of the GCP console: https://console.cloud.google.com/storage/browser and create a bucket. Bucket names are globally unique, so the console will not allow you to create a bucket whose name is already taken.

The bucket should be in the same region as your notebook instance.
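
If you prefer the command line, you can also create the bucket from a notebook Terminal with, for example, gsutil mb -l us-central1 gs://abc-12345 (substitute your own globally unique bucket name and your instance's region).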

Configure the Dataflow job

Skip to the bottom of the notebook and find the last-but-one cell, which contains the line python -m jpeg_to_tfrecord

Change the BUCKET setting to reflect the name of the bucket you created in the previous step. For example, you might set it to be: BUCKET=abc-12345

Run Dataflow job

Run the cell by pressing Shift + Enter.

Monitor Dataflow job

View the progress of the Dataflow job by navigating to the GCP console section for Dataflow: https://console.cloud.google.com/dataflow/jobs When the job completes, you will see 3 datasets created in the bucket.

Note: This job will take about 20 minutes to complete, so we will do the next step starting from an already created dataset in the bucket gs://practical-ml-vision-book/
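
If you want to peek at the prepared data first, running gsutil ls gs://practical-ml-vision-book/ from a notebook Terminal should list the TensorFlow Records the next step assumes.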

4. Train and export a SavedModel

In this step, you will train a transfer learning model on data contained in TensorFlow Records. Then, you will export the trained model in SavedModel format to a local directory named export/flowers_model.
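
In Keras, the export itself is a one-liner. A minimal sketch follows; the tiny stand-in model is there only so the snippet runs on its own:

    import tensorflow as tf

    # Stand-in for the trained flowers model from the notebook.
    model = tf.keras.Sequential([tf.keras.layers.Dense(5, input_shape=(4,))])

    model.save('export/flowers_model')  # writes SavedModel format to this directory
    reloaded = tf.keras.models.load_model('export/flowers_model')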

Open notebook

Navigate to practical-ml-vision-book/07_training/07c_export.ipynb

Clear cells

Clear cells by selecting Edit | Clear All Outputs

Note: By default, this notebook trains on the complete dataset and will take about 5 minutes on a GPU, but considerably longer on a CPU. If you are using a CPU instead of a GPU, change the PATTERN_SUFFIX to process only the first (00, 01) shards and train for only 3 epochs. The resulting model will not be very accurate, but it will allow you to proceed to the next step in a reasonable amount of time. You can make this change in the first cell of the "Training" section of the notebook.

Run cells

Run the cells one by one: read each cell, then execute it by pressing Shift + Enter.

5. Deploy model to Vertex AI

In this step, you will deploy the model as a REST web service on Vertex AI, and try out online and batch predictions as well as streaming predictions.
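
To give a feel for the flow, here is a hedged sketch using the google-cloud-aiplatform SDK; the project, bucket, serving container image, machine type, and instance format below are placeholders, and the notebook's own code is authoritative:

    from google.cloud import aiplatform

    aiplatform.init(project='your-project', location='us-central1')

    # Upload the exported SavedModel with a prebuilt TensorFlow serving container.
    model = aiplatform.Model.upload(
        display_name='flowers',
        artifact_uri='gs://your-bucket/flowers_model',
        serving_container_image_uri=(
            'us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-6:latest'),
    )

    # Deploy to an endpoint for online (REST) predictions.
    endpoint = model.deploy(machine_type='n1-standard-2')

    # The instance keys must match the model's serving signature.
    print(endpoint.predict(instances=[{'filenames': 'gs://your-bucket/daisy.jpg'}]))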

Open notebook

Navigate to practical-ml-vision-book/09_deploying/09b_rest.ipynb

Clear cells

Clear cells by selecting Edit | Clear All Outputs

Run cells

Run the cells one by one: read each cell, then execute it by pressing Shift + Enter.

6. Create an ML Pipeline

In this step, you will deploy the end-to-end ML workflow as an ML Pipeline so that you can run repeatable experiments easily.

Because Vertex AI Pipeline is still in preview, you will create pipelines that run OSS Kubeflow Pipelines on GKE.
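
To sketch how such a pipeline is driven from a notebook (the kfp v1 SDK is assumed here; the component image and host string are placeholders, and the real pipeline is defined in notebook 10a):

    import kfp
    from kfp import dsl

    @dsl.pipeline(name='flowers', description='End-to-end flowers workflow (sketch)')
    def flowers_pipeline():
        # Each step runs as a container on the GKE cluster.
        dsl.ContainerOp(name='train',
                        image='gcr.io/your-project/flowers-train:latest')

    # KFPHOST is the Kubeflow Host ID you note down in the steps below.
    client = kfp.Client(host='https://...pipelines.googleusercontent.com')
    client.create_run_from_pipeline_func(flowers_pipeline, arguments={})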

Launch Kubeflow Pipelines on GKE

Browse to https://console.cloud.google.com/ai-platform/pipelines/clusters and click on New Instance.

Create GKE cluster

In the Marketplace, click Configure.

Click Create a new cluster.

Check the box to allow access to Cloud APIs

Make sure the region is correct.

Click Create cluster. This will take about 5 minutes.

Deploy Kubeflow on the cluster

Change the app instance name to mlvision-book

Click Deploy. This will take about 5 minutes.

Note Kubeflow Host ID

In the AI Platform Pipelines section of the console (you may need to click Refresh), click on Settings and note the Kubeflow Host ID. It will be something like https://40e09ee3a33a422-dot-us-central1.pipelines.googleusercontent.com

Open notebook

Navigate to practical-ml-vision-book/10_mlops/10a_mlpipeline.ipynb

Install Kubeflow

Run the first cell to pip install kfp.

Then, restart the kernel using the button on the ribbon at the top of the notebook.

In the second cell, change the KFPHOST variable to the hostname you noted down from the AI Platform Pipelines SDK settings.

Clear cells

Clear cells by selecting Edit | Clear All Outputs

Run cells

Run the cells one by one: read each cell, then execute it by pressing Shift + Enter.

Click on the generated Run details link.

Wait for the workflow to complete.

Congratulations

Congratulations, you've successfully built an end-to-end machine learning model for image classification.

Full tour of book

For a shorter exploration, see Quick Tour (above).

We recommend creating a brand-new GCP project to try these out. Then, delete the project when you are done to make sure that all resources have been cleaned up.

1. Ensure that you have GPU quota

Visit the GCP console at https://console.cloud.google.com/ and navigate to IAM & Admin | Quotas. You can also navigate to it directly by visiting https://console.cloud.google.com/iam-admin/quotas

In the Filter, start typing Nvidia and choose NVIDIA T4 GPUs. Make sure you have a region with a limit greater than zero. If not, please request a quota increase.

2. Navigate to the Vertex AI Workbench section of the GCP console

Visit the GCP console at https://console.cloud.google.com/ and navigate to Vertex AI | Workbench. You can also navigate to it directly by visiting https://console.cloud.google.com/vertex-ai/workbench/

Click on +New Instance at the top of the page. Then, select TensorFlow Enterprise 2.6 with Nvidia Tesla T4.

3. Create a Notebook instance

Name the instance mlvision-book-gpu

Check the box to install the Nvidia driver automatically. If you missed it, delete the instance and start again.

Click on Advanced

Change Machine Type to n1-highmem-4

Change GPU Type to Nvidia Tesla T4

Change Disk | Data Disk Size to 300 GB

Change Permission | Single User to your email address

Click Create to accept the other defaults.

This step will take about 10 minutes.

4. Create a Cloud Storage bucket

Navigate to the Storage section of the GCP console: https://console.cloud.google.com/storage/browser and create a bucket. The console will not allow you to create a bucket with a name that already exists. The bucket should be in the same region as your notebook instance.

5. Clone the book’s code repository

Go to the Vertex Workbench section of the GCP console. Click on the link to Open JupyterLab

In JupyterLab, click on the git clone button (the right-most button at the top of the left panel). In the textbox, type in: https://github.com/GoogleCloudPlatform/practical-ml-vision-book

Note: An alternative way to clone the repository is to launch a Terminal and then type:

    git clone https://github.com/GoogleCloudPlatform/practical-ml-vision-book

6. Run through the notebooks

  • In JupyterLab, navigate to the folder practical-ml-vision-book/02_ml_models

  • Open the notebook 02a.

    • Edit | Clear All Outputs
    • Read and run each cell one by one by pressing Shift + Enter (or click Run | Restart Kernel and Run All Cells).
    • Go to the list of running Terminals and Kernels (the second button from the top on the extreme left of JupyterLab)
    • Stop the 02a notebook. Stop the kernel every time you finish running a notebook; otherwise, you will run out of memory.
  • Now, open and run notebook 02b, and repeat steps listed above.

  • In Chapter 3, run only the flowers5 notebooks (3a and 3b on MobileNet).

    • Run 3a_transfer_learning
    • Run 3b_finetune_MOBILENETV2_flowers5 -- note that if AdamW is not found, you may have to restart the kernel. See the instructions in the notebook.
    • Many of the flowers104 notebooks will require a more powerful machine. We did these notebooks using TPUs. See README_TPU.md for details. You can try adding more GPUs if you don't have access to TPUs but this has not been tested.
  • In Chapter 4

    • Unet segmentation will work on a T4.
    • Works in TensorFlow 2.7+ only. Follow the README directions in the directory to try out RetinaNet. You'll need a high-bandwidth internet connection to download and upload the 12 GB dataset. Also, you need to create a new Workbench instance with TensorFlow 2.7+ (not 2.6).
  • In Chapter 5, you can run the notebooks in any order.

  • In Chapter 6:

    • Change the BUCKET variable in run_dataflow.sh
    • Run the notebooks in order.
    • 6h, the TF Transform notebook, is broken (most likely a Python dependency problem).
  • In Chapter 7, run the notebooks in order.

    • In 7c, make sure to change the BUCKET where marked.
  • In Chapter 9, run the notebooks in order.

    • Make sure to change ENDPOINT_ID, BUCKET, etc. to reflect your environment.
  • In Chapter 10:

    • Start a Kubeflow Pipelines cluster by visiting https://console.cloud.google.com/marketplace/product/google-cloud-ai-platform/kubeflow-pipelines
    • Make sure to allow access to Cloud Platform APIs from the cluster
    • Once the cluster has started, click "Deploy"
    • Once deployed, click on the link to go to the Kubeflow Pipelines console and look at the Settings
    • Note the HOST string shown there
    • In JupyterLab, edit the KFPHOST variable in 10a to reflect the cluster that you just started
    • Run 10a and 10b
  • In Chapter 11, run the notebooks in order.

  • In Chapter 12, run the notebooks in order.

Common issues that readers run into

  • Out of memory. Make sure that you have shut down all previous notebooks.
  • AdamW not found. Make sure that you restart the kernel when you start the notebook; AdamW has to be imported before the first TensorFlow call (see the sketch below).
  • Bucket permissions problem. Make sure to change the BUCKET variable to something you own.
  • Weird GPU errors. Most likely, the GPU is out of memory. Shut down other notebooks, restart the kernel, and try again.
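
A minimal sketch of the import-order fix for AdamW, assuming (an assumption, not stated above) that it comes from the tensorflow_addons package used by the fine-tuning notebooks:

    import tensorflow_addons as tfa  # import the add-on before TensorFlow is first used
    import tensorflow as tf

    # Hypothetical hyperparameters; the notebook sets its own values.
    optimizer = tfa.optimizers.AdamW(weight_decay=1e-4, learning_rate=1e-3)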

Feedback? Please file an Issue in the GitHub repo.
