Build and Deploy Cartoonify: a Serverless Machine Learning App

Deploy and scale a serverless machine learning app in 4 steps.

This repo contains all the code needed to run, build and deploy Cartoonify: a toy app I made from scratch to turn your pictures into cartoons.

Here's what motivated me to start this project:

  • Give GANs a try. I've been fascinated by these models lately. Trying the CartoonGAN model to turn your face into a cartoon seemed like real fun

  • Learn about deploying an application on a serverless architecture using different services of AWS (Lambda, API Gateway, S3, etc.)

  • Practice my React skills. I was so damn bored of Plotly, Dash and Streamlit. I wanted, for once, to build something custom and less mainstream

  • Use Netlify to deploy this React app. I saw demos of how easy this process was and I wanted to try it to convince myself

If you're interested in this project, here's a short introduction 🎥

0. Some prerequisites to build and deploy Cartoonify 🛠

If you want to run and deploy Cartoonify, here are some prerequisites first:

  • An AWS account (don't worry, deploying this app will cost you almost nothing)
  • A free account on Netlify
  • Docker installed on your machine
  • node and npm (preferably the latest versions) installed on your machine
  • torch and torchvision to test CartoonGAN locally (optional)

All set? You're now ready to go.

Testing CartoonGAN on Google Colab

Check out cartoongan/notebooks/standalone_cartoonify.ipynb, or run it online on Colab.

Please follow these four steps:

1. Test CartoonGAN locally

Some parts of the CartoonGAN code as well as the pretrained models are borrowed from this repo. A shout-out to them for the great work!

This is more of an exploratory step where you get to play with the pretrained models and try them (so inference only) on some sample images.

If you're interested in the training procedure, have a look at the CartoonGAN paper

  • Download the four pretrained models first. These weights will be loaded inside the Generator model defined in cartoongan/network/Transformer.py
cd cartoongan
bash download_pth.sh
  • To test one of the four models, head over to the notebook cartoongan/notebooks/CartoonGAN.ipynb and change the input image path to your test image. This notebook calls the cartoongan/test_from_code.py script to make the transformation (see the standalone sketch below).
cd cartoongan/notebooks
jupyter notebook
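
For reference, here's roughly what that notebook does under the hood. This is a minimal sketch, assuming you run it from the cartoongan/ folder (so that network.Transformer is importable) and that the downloaded weights follow the <style>_net_G_float.pth naming used elsewhere in this README; adjust the paths to your setup.

import numpy as np
import torch
import torchvision.transforms as transforms
from PIL import Image

from network.Transformer import Transformer

# load the pretrained generator for one of the four styles
model = Transformer()
model.load_state_dict(torch.load("Hayao_net_G_float.pth", map_location="cpu"))
model.eval()

# preprocess the input image: RGB -> BGR, scale values to (-1, 1)
image = Image.open("sample.jpg").convert("RGB")
image = np.asarray(image)[:, :, [2, 1, 0]]
tensor = -1 + 2 * transforms.ToTensor()(image).unsqueeze(0)

# run inference, then undo the preprocessing: BGR -> RGB, back to (0, 1)
with torch.no_grad():
    output = model(tensor.float())[0]
output = output[[2, 1, 0], :, :] * 0.5 + 0.5
output = np.uint8(output.cpu().numpy().transpose(1, 2, 0) * 255)
Image.fromarray(output).save("sample_cartoon.png")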

You can watch this section on YouTube to learn more about GANs and the CartoonGAN model.

2. Deploy CartoonGAN on a serverless API using AWS Lambda

The goal of this section is to deploy the CartoonGAN model on a serverless architecture so that it can be requested through an API endpoint ... from the internet 💻

Why does a serverless architecture matter?

In a serverless architecture using Lambda functions, for example, you don't have to provision servers yourself. Roughly speaking, you only write the code that'll be executed and list its dependencies, and AWS will manage the servers for you automatically and take care of the infrastructure.

This has a lot of benefits:

  1. Cost efficiency: you don't pay for a serverless architecture when you don't use it. In contrast, when you have an EC2 machine running but not processing any requests, you still pay for it.

  2. Scalability: if a serverless application starts receiving a lot of requests at the same time, AWS scales it by allocating more resources to handle the load. If you had to manage the load yourself with EC2 instances, you would do this by manually provisioning more machines and setting up a load balancer.

Of course, serverless architectures are not a perfect fit for every use case. In some situations, they are not practical at all (need for real-time or low-latency responses, use of WebSockets, heavy processing, etc.).

Since I frequently build machine learning models and integrate them into web applications, I found that a serverless architecture was interesting in these specific use-cases. Of course, here the models are used in inference only ⚠️

Cartoonify workflow

Here's the architecture of the app:

  • On the right side, we have a frontend interface in React and on the left side, we have a backend deployed on a serverless AWS architecture.

  • The backend and the frontend communicate with each other over HTTP requests. Here is the workflow:

    • An image is sent from the client through a POST request
    • The image is then received via API Gateway
    • API Gateway triggers a Lambda function to execute and passes the image to it
    • The Lambda function starts running: it first fetches the pretrained models from S3 and then applies the style transformation to the image
    • Once the Lambda function is done running, it sends the transformed image back to the client through API Gateway.

Deploy using the Serverless framework

We are going to define and deploy this architecture by writing it as a YAML file using the Serverless framework: an open-source tool to automate deployment to AWS, Azure, Google Cloud, etc.

Here are the steps to follow:

  1. Install the serverless framework on your machine
npm install -g serverless
  2. Create an IAM user on AWS with administrator access and name it cartoonify. Then configure serverless with this user's credentials:
serverless config credentials --provider aws \
                              --key <ACCESS_KEY> \
                              --secret <SECRET_KEY> \
                              --profile cartoonify
  3. Bootstrap a serverless project with a Python template at the root of this project
serverless create --template aws-python --path backend

From here, you can either follow steps 4 to 10 to understand what happens, or directly run the code you just cloned to deploy the app.

If you're in a hurry, just run these commands:

cd backend/
npm install
sls deploy
  4. Install two Serverless plugins:
sls plugin install -n serverless-python-requirements
npm install --save-dev serverless-plugin-warmup
  5. Create a folder called network inside backend and put the following two files in it:

    • Transformer.py: a script that holds the architecture of the generator model.
    • A blank __init__.py
  6. Modify the serverless.yml file with the following sections:

# The provider section where we setup the provider, the runtime and the permissions:

provider:
  name: aws
  runtime: python3.7
  profile: cartoonify
  region: eu-west-3
  timeout: 60
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:getObject
      Resource: arn:aws:s3:::cartoongan/models/*
    - Effect: Allow
      Action:
        - "lambda:InvokeFunction"
      Resource: "*"

# The custom section where we configure the plugins:
custom:
  pythonRequirements:
    dockerizePip: true
    zip: true
    slim: true
    strip: false
    noDeploy:
      - docutils
      - jmespath
      - pip
      - python-dateutil
      - setuptools
      - six
      - tensorboard
    useStaticCache: true
    useDownloadCache: true
    cacheLocation: "./cache"
  warmup:
    events:
      - schedule: "rate(5 minutes)"
    timeout: 50

# The package section where we exclude folders from production
package:
  individually: false
  exclude:
    - package.json
    - package-lock.json
    - node_modules/**
    - cache/**
    - test/**
    - __pycache__/**
    - .pytest_cache/**
    - model/pytorch_model.bin
    - raw/**
    - .vscode/**
    - .ipynb_checkpoints/**

# The functions section where we create the Lambda function and define the events that invoke it:
functions:
  transformImage:
    handler: src/handler.lambda_handler
    memorySize: 3008
    timeout: 300
    events:
      - http:
          path: transform
          method: post
          cors: true
    warmup: true

# and finally the plugins section:
plugins:
  - serverless-python-requirements
  - serverless-plugin-warmup

  7. List the dependencies inside requirements.txt
https://download.pytorch.org/whl/cpu/torch-1.1.0-cp37-cp37m-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp37-cp37m-linux_x86_64.whl
Pillow==6.2.1
  8. Create a src folder inside backend and put handler.py in it to define the Lambda function. Then modify handler.py:
# Define imports
try:
    import unzip_requirements
except ImportError:
    pass

import json
from io import BytesIO
import time
import os
import base64

import boto3
import numpy as np
from PIL import Image

import torch
import torchvision.transforms as transforms
from torch.autograd import Variable
import torchvision.utils as vutils
from network.Transformer import Transformer

# Define two functions inside handler.py: img_to_base64_str to
# convert binary images to base64 format and load_models to
# load the four pretrained models inside a dictionary and then
# keep them in memory

def img_to_base64_str(img):
    buffered = BytesIO()
    img.save(buffered, format="PNG")
    buffered.seek(0)
    img_byte = buffered.getvalue()
    img_str = "data:image/png;base64," + base64.b64encode(img_byte).decode()
    return img_str


def load_models(s3, bucket):
    styles = ["Hosoda", "Hayao", "Shinkai", "Paprika"]
    models = {}

    for style in styles:
        model = Transformer()
        response = s3.get_object(
            Bucket=bucket, Key=f"models/{style}_net_G_float.pth")
        state = torch.load(BytesIO(response["Body"].read()))
        model.load_state_dict(state)
        model.eval()
        models[style] = model

    return models
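
# Module-level setup (a sketch, not shown above): the handler relies on a boto3
# S3 client, the bucket holding the weights, the preloaded models, a
# model_id -> style mapping and a gpu flag. The bucket name below is assumed
# from the arn:aws:s3:::cartoongan/models/* resource in serverless.yml, and the
# id -> style mapping is illustrative.
s3 = boto3.client("s3")
bucket = "cartoongan"
models = load_models(s3, bucket)

mapping_id_to_style = {0: "Hosoda", 1: "Hayao", 2: "Shinkai", 3: "Paprika"}
gpu = -1  # no GPU inside Lambda, so inference runs on CPU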

def lambda_handler(event, context):
  """
  lambda handler to execute the image transformation
  """
  # warming up the lambda
  if event.get("source") in ["aws.events", "serverless-plugin-warmup"]:
      print('Lambda is warm!')
      return {}

  # extracting the image from the payload and converting it to PIL format
  data = json.loads(event["body"])
  print("data keys :", data.keys())
  image = data["image"]
  image = image[image.find(",")+1:]
  dec = base64.b64decode(image + "===")
  image = Image.open(BytesIO(dec))
  image = image.convert("RGB")

  # loading the model with the selected style based on the model_id payload
  model_id = int(data["model_id"])
  style = mapping_id_to_style[model_id]
  model = models[style]

  # resize the image based on the load_size payload
  load_size = int(data["load_size"])

  # note: PIL's Image.size is (width, height), so h actually holds the width here
  h = image.size[0]
  w = image.size[1]
  ratio = h * 1.0 / w
  if ratio > 1:
      h = load_size
      w = int(h*1.0 / ratio)
  else:
      w = load_size
      h = int(w * ratio)

  image = image.resize((h, w), Image.BICUBIC)
  image = np.asarray(image)

  # convert the image array from RGB to BGR
  image = image[:, :, [2, 1, 0]]
  image = transforms.ToTensor()(image).unsqueeze(0)

  # transform values to (-1, 1)
  image = -1 + 2 * image
  if gpu > -1:
      image = Variable(image, volatile=True).cuda()
  else:
      image = image.float()

  # style transformation
  with torch.no_grad():
      output_image = model(image)
      output_image = output_image[0]

  # convert PIL image from BGR back to RGB
  output_image = output_image[[2, 1, 0], :, :]

  # transform values back to (0, 1)
  output_image = output_image.data.cpu().float() * 0.5 + 0.5

  # convert the transformed tensor to a PIL image
  output_image = output_image.numpy()
  output_image = np.uint8(output_image.transpose(1, 2, 0) * 255)
  output_image = Image.fromarray(output_image)

  # convert the PIL image to base64
  result = {
      "output": img_to_base64_str(output_image)
  }

  # send the result back to the client inside the body field
  return {
      "statusCode": 200,
      "body": json.dumps(result),
      "headers": {
          'Content-Type': 'application/json',
          'Access-Control-Allow-Origin': '*'
      }
  }
  9. Start Docker

  10. Deploy 🚀

    cd backend/
    sls deploy

Deployment may take 5 to 8 minutes, so go grab a ☕.

Once the Lambda function is deployed, you'll be shown the URL of the API. Open a Jupyter notebook to test it:
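
If you prefer a quick script over a notebook, here's a minimal sketch of such a test request. The URL is a placeholder for the endpoint printed by sls deploy, the file names are just examples, and the image, model_id and load_size fields match what the handler above expects:

import base64

import requests

# placeholder: replace with the endpoint printed by `sls deploy`
api_url = "https://<api-id>.execute-api.eu-west-3.amazonaws.com/dev/transform"

# encode a local picture as a base64 data URI, the format the handler expects
with open("selfie.jpg", "rb") as f:
    image_b64 = "data:image/png;base64," + base64.b64encode(f.read()).decode()

payload = {
    "image": image_b64,
    "model_id": 1,      # one of the four styles loaded by the Lambda
    "load_size": 800,   # target size used by the handler to resize the image
}
response = requests.post(api_url, json=payload)

# the handler returns {"output": "data:image/png;base64,..."} in the body
output_b64 = response.json()["output"]
with open("cartoon.png", "wb") as out:
    out.write(base64.b64decode(output_b64.split(",")[1]))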

You can watch this section on YouTube to get every detail of it.

3. Build a React interface

  • Before running and building the React app, you'll have to specify the API URL of the model you just deployed. Go inside frontend/src/api.js and change the value of baseUrl.

  • To run the React app locally:

cd frontend/
yarn install
yarn start

This will start it at: http://localhost:3000

  • To build the app before deploying it to Netlify
yarn build

This will create a build/ folder that contains a build of the application to be served on Netlify.

You can watch this section on YouTube to understand how the code is structured.

4. Deploy the React app on Netlify

  • To be able to deploy on Netlify, you'll need an account. It's free; head over to this link to sign up.

  • Then you'll need to install netlify-cli

npm install netlify-cli -g
  • Authenticate the Netlify client with your account
netlify login
  • Deploy 🚀
cd app/
netlify deploy

You can watch this section on YouTube to see how easy the deployment on Netlify can be.

5. Want to contribute? 😁

If you've made it this far, I sincerely thank you for your time!

If you liked this project and want to improve it, be my guest: I'm open to pull requests.
