• Stars: 835
• Rank: 54,198 (Top 2%)
• Language: Python
• License: MIT License
• Created: over 3 years ago
• Updated: over 1 year ago

Repository Details

An end-to-end implementation of intent prediction with Metaflow and other cool tools

You Don't Need a Bigger Boat

An end-to-end (Metaflow-based) implementation of an intent prediction (and session recommendation) flow for kids who can't MLOps good and wanna learn to do other stuff good too.

After a few months of iterations, this project is now stable. Quick Links:

  • Our MLOps blog series is completed;
  • A new open source repo has been released, showing a simplified version of many of the concepts in this project, to provide a gentler entry point into modern MLOps pipelines;
  • A second open source repo has been released in collaboration with Outerbounds and NVIDIA, showing a Merlin-focused version of many of the concepts in this project.

If you find this project (or its sister repositories above) useful, please add a star to help us spread the word!

Philosophical Motivations

There are plenty of tutorials and blog posts around the Internet on data pipelines and tooling. However:

  • they (for good pedagogical reasons) tend to focus on one tool / step at a time, leaving us to wonder how the rest of the pipeline works;
  • they (for good pedagogical reasons) tend to work in a toy-world fashion, leaving us to wonder what would happen when a real dataset and a real-world problem enter the scene.

This repository (and the accompanying written tutorial, still to be drafted) aims to fill these gaps. In particular:

  • we provide open-source working code that glues together what we believe are some of the best tools in the ecosystem, going all the way from raw data to a deployed endpoint serving predictions;
  • we run the pipeline under a realistic load for companies at "reasonable scale", leveraging a huge open dataset we released in 2021; moreover, we train a model for a real-world use case, and show how to monitor it after deployment.

The repo may also be seen as a (very opinionated) introduction to modern, PaaS-like pipelines (as also discussed here). While there is obviously room for disagreement over tool X or tool Y, we believe the general principles to be sound for companies at "reasonable scale": in between bare-bones infrastructure for Tech Giants and ready-made solutions for low-code/simple scenarios, there is a world of exciting machine learning at scale for sophisticated practitioners who don't want to waste their time managing cloud resources.

Note #1: while the code is provided as an end-to-end solution, we may sacrifice some terseness for clarity / pedagogical reasons.

Note #2: when we say the pipeline is an "end-to-end solution", we mean it - it goes from millions of raw events to a working endpoint that you can ping. As such, there are many moving pieces and it may take a while to understand how they all fit together: this is not meant to be a recipe for building a small ML-powered feature, but a template for building an entire AI company (at least, the beginning of one). As such, the learning curve is a bit steeper, but you will be rewarded with an ML stack tried and tested at unicorn scale.

Note #3: starting June 2022, a new repo is available, showcasing how to join dataOps and MLOps in a simplified, yet realistic environment: check it out as a gentler introduction to the same concepts!

Overview

The repo shows how several (mostly open-source) tools can be effectively combined to run data pipelines at scale with very small teams.

The following picture from Recsys gives a quick overview of a similar pipeline:

Recsys flow

We provide two versions of the pipeline, depending on the sophistication of the setup:

  • a Metaflow-only version, which runs from static data files to Sagemaker as a single Flow, and can be run from a Metaflow-enabled laptop without much additional setup;
  • a data warehouse version, which runs in a more realistic setup, reading data from Snowflake and using an external orchestrator to run the steps. In this setup, the downside is that Snowflake and Prefect Cloud accounts are required (both are easy to get, though); the upside is that the pipeline reflects almost perfectly a real setup, and Metaflow can be used specifically for the ML part of the process.

The parallelism between the two scenarios should be pretty clear by looking at the two projects: if you are getting familiar with these tools for the first time, we suggest starting with the Metaflow version and then moving to the full-scale one once all the pieces of the puzzle are well understood.

Note: if you are new to Metaflow, we recommend going through the official installation guide and this stand-alone tutorial first.

Relevant Material

If you want to know more, you can take a look at the following material:

Setup

General Prerequisites (do this first!)

Irrespective of the flow you wish to run, some general tools need to be in place: Metaflow, of course, as the heart of our ML practice, but also data and AWS users/roles. Please go through the general items below before tackling the flow-specific instructions.

After you finish the prerequisites below, you can run the flow you desire: each folder - remote and local - contains a specific README which should allow you to quickly run the project end-to-end: please refer to that documentation for flow-specific instructions.

Dataset

The project leverages the open dataset from the 2021 Coveo Data Challenge: the dataset can be downloaded directly from here (refer to the full README for terms and conditions). Data is freely available under a research-friendly license - for background information on the dataset, the use cases and relevant work in the ML literature, please refer to the accompanying paper.

Once you download and unzip the dataset in a local folder of your choice (the zip contains 3 files, browsing_train.csv, search_train.csv, sku_to_content.csv), write down their locations as absolute paths (e.g. /Users/jacopo/Documents/data/train/browsing_train.csv): both projects need to know where the dataset is.
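
For concreteness, a configuration along these lines could live in a .env file; the variable names below are hypothetical placeholders, so match them to whatever the flow-specific READMEs actually expect:

# hypothetical .env sketch: adjust variable names to the flow-specific READMEs
BROWSING_TRAIN_PATH=/Users/jacopo/Documents/data/train/browsing_train.csv
SEARCH_TRAIN_PATH=/Users/jacopo/Documents/data/train/search_train.csv
SKU_TO_CONTENT_PATH=/Users/jacopo/Documents/data/train/sku_to_content.csv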

AWS

Both projects - remote and local - use AWS services extensively, and by design: this ties back to our philosophy of PaaS-whenever-possible, and plays nicely with our core adoption of Metaflow. While you can set up your users in many functionally equivalent ways, note that if you want to run the pipeline from ingestion to serving you need to be comfortable with the following AWS interactions (a minimal sanity check is sketched after the list):

  • Metaflow stack (see below): we assume you installed the Metaflow stack and can run it with an AWS profile of your choice;
  • Serverless stack (see below): we assume you can run serverless deploy in your AWS stack;
  • Sagemaker user: we assume you have an AWS user with permissions to manage Sagemaker endpoints (it may be totally distinct from any other Metaflow user).
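
Assuming the AWS CLI is installed, a quick way to create and verify a profile before touching any of the flows is the following (the profile name metaflow is just the convention used in this README):

aws configure --profile metaflow
AWS_PROFILE=metaflow aws sts get-caller-identity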

Serverless

We wrap Sagemaker predictions in a serverless REST endpoint provided by AWS Lambda and API Gateway. To manage the lambda stack we use Serverless as a wrapper around AWS infrastructure.
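
For intuition, here is a minimal sketch of what such a Lambda function could look like with boto3; the endpoint name and the query string parameter are hypothetical placeholders (the real endpoint is created by the deployment step of the flow):

import json
import boto3

# hypothetical endpoint name: the real one is created by the deployment step of the flow
ENDPOINT_NAME = 'intent-prediction-endpoint'
runtime = boto3.client('sagemaker-runtime')

def predict(event, context):
    # API Gateway passes the input as a query string parameter ('x' is a hypothetical name)
    payload = (event.get('queryStringParameters') or {}).get('x', '')
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType='application/json',
        Body=json.dumps({'instances': [payload]})
    )
    prediction = json.loads(response['Body'].read().decode('utf-8'))
    return {'statusCode': 200, 'body': json.dumps({'prediction': prediction})}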

Metaflow

Metaflow: Configuration

If you have an AWS profile configured with a Metaflow-friendly user, and you created the Metaflow stack with CloudFormation, you can run the following command with the resources created by CloudFormation to set up Metaflow on AWS:

metaflow configure aws --profile metaflow

Remember to set METAFLOW_PROFILE=metaflow to use this profile when running a flow. Once you complete the setup, you can run flow_playground.py to test that the AWS setup is working as expected (in particular, that GPU batch jobs run correctly). To run the flow with the custom profile, do:

METAFLOW_PROFILE=metaflow python flow_playground.py run

Metaflow: Tips & Tricks
  1. Parallelism Safe Guard
    • The flag --max-workers should be used to limit the maximum number of parallel steps
    • For example METAFLOW_PROFILE=metaflow python flow_playground.py run --max-workers 8 limits the maximum number of parallel tasks to 8
  2. Environment Variables in AWS Batch
    • The @environment decorator is used in conjunction with @batch to pass environment variables to AWS Batch, which will not otherwise have access to the env variables on your local machine
    • In the local example, we use @environment to pass the Weights & Biases API key (amongst other things); see the sketch after this list
  3. Resuming Flows
    • Resuming flows is useful during development to avoid re-running compute/time intensive steps such as data preparation
    • METAFLOW_PROFILE=metaflow python flow_playground.py resume <STEP_NAME> --origin-run-id <RUN_ID>
  4. Local-Only execution
    • It may sometimes be useful to debug locally (e.g. to avoid Batch startup latency), so we introduce a wrapper enable_decorator around the @batch decorator which enables or disables the decorator's functionality
    • We use this in conjunction with an environment variable EN_BATCH to toggle the functionality of all @batch decorators; a sketch of this pattern follows the list
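
To make items 2 and 4 concrete, here is a minimal, self-contained sketch of the pattern; the step names and resource values are illustrative, not the ones used in the actual flows:

import os
from metaflow import FlowSpec, step, batch, environment

def enable_decorator(dec, flag):
    # apply 'dec' only when 'flag' is truthy; otherwise leave the step untouched
    def wrapper(func):
        return dec(func) if flag else func
    return wrapper

class PlaygroundFlow(FlowSpec):

    @step
    def start(self):
        self.next(self.train)

    # EN_BATCH=1 ships this step to AWS Batch; EN_BATCH=0 runs it locally.
    # @environment forwards the W&B key, which the Batch container cannot read from your shell.
    @enable_decorator(batch(gpu=1, memory=16000), flag=int(os.getenv('EN_BATCH', '0')))
    @environment(vars={'WANDB_API_KEY': os.getenv('WANDB_API_KEY', '')})
    @step
    def train(self):
        self.next(self.end)

    @step
    def end(self):
        pass

if __name__ == '__main__':
    PlaygroundFlow()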

FAQ

  1. Both projects deal with data that has already been ingested/transmitted to the pipeline, but are silent on data collection. Any serverless option there as well?

    Yes. In e-commerce use cases, for example, pixel tracking is standard (e.g. Google Analytics), so a serverless /collect endpoint can be used to get front-end data. In January 2022, we released a new blog post and open-source repository describing in detail a principled and serverless approach to this problem (a minimal pixel-endpoint sketch follows this FAQ).

  2. What is missing / could be added if I wanted to collaborate on this project?

    A few obvious things that are missing: i) GitHub Actions for CI/CD; ii) standardized AWS permissions (as of now, most commands work when launched as admin users). Want to join us? Please reach out!
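
As a companion to FAQ #1, here is a minimal sketch of such a pixel endpoint, in the spirit of the pixel_from_lambda repository: a Lambda handler behind API Gateway that logs the incoming parameters and returns a 1x1 transparent GIF (the handler name and logging are illustrative):

import json

# base64 of a commonly used 1x1 transparent GIF
PIXEL_GIF = 'R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7'

def collect(event, context):
    # in a real collector, you would persist the query string parameters somewhere durable
    print(json.dumps(event.get('queryStringParameters') or {}))
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'image/gif'},
        'body': PIXEL_GIF,
        'isBase64Encoded': True
    }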

Contributors

How to Cite our Work

If you find our principles, code or data useful, please cite our work:

Paper (RecSys2021)

@inproceedings{10.1145/3460231.3474604,
  author = {Tagliabue, Jacopo},
  title = {You Do Not Need a Bigger Boat: Recommendations at Reasonable Scale in a (Mostly) Serverless and Open Stack},
  year = {2021},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3460231.3474604},
  doi = {10.1145/3460231.3474604},
  booktitle = {Fifteenth ACM Conference on Recommender Systems},
  series = {RecSys '21}
}

Data

@inproceedings{CoveoSIGIR2021,
  author = {Tagliabue, Jacopo and Greco, Ciro and Roy, Jean-Francis and Bianchi, Federico and Cassani, Giovanni and Yu, Bingqing and Chia, Patrick John},
  title = {SIGIR 2021 E-Commerce Workshop Data Challenge},
  year = {2021},
  booktitle = {SIGIR eCom 2021}
}

More Repositories

1. reclist - Behavioral "black-box" testing for recommender systems (Python, 408 stars)
2. MLSys-NYU-2022 - Slides, scripts and materials for the Machine Learning in Finance course at NYU Tandon, 2022 (Jupyter Notebook, 328 stars)
3. recs-at-resonable-scale - Recommendations at "Reasonable Scale": joining dataOps with recSys through dbt, Merlin and Metaflow (Python, 224 stars)
4. post-modern-stack - Joining the modern data stack with the modern ML stack (Python, 187 stars)
5. foundation-models-for-dbt-entity-matching - Playground for using large language models in the Modern Data Stack for entity matching (Python, 105 stars)
6. FREE_7773 - Materials for my 2021 NYU class on NLP and ML Systems (Master of Engineering) (Jupyter Notebook, 96 stars)
7. paas-data-ingestion - Ingesting data with Pulumi, AWS lambdas and Snowflake in a scalable, fully replayable manner (PLpgSQL, 66 stars)
8. tensorflow_to_lambda_serverless - Serve tensorflow model predictions from AWS lambda endpoints (Python, 58 stars)
9. no-ops-machine-learning - A PaaS end-to-end ML setup with Metaflow, Serverless and SageMaker (Python, 36 stars)
10. dag-card-is-the-new-model-card - Template-based generation of DAG cards from Metaflow classes, inspired by Google cards for machine learning models (Python, 29 stars)
11. retail-personalization-workshop - In-Session Personalization Workshop for eCommerce, April 2021, and the MICES Workshop in June 2021 (Jupyter Notebook, 21 stars)
12. anki-drive-python-sdk - Python+node wrapper to read/send messages from/to Anki Overdrive bluetooth vehicles (Python, 17 stars)
13. clothes-in-space - Personalization with deep learning in 100 lines of code (Jupyter Notebook, 14 stars)
14. pixel_from_lambda - Serve a 1x1 GIF pixel from an AWS lambda-powered endpoint (Python, 13 stars)
15. MLSys-NYU-2023 - Slides, scripts and materials for the Machine Learning in Finance course at NYU Tandon, 2023 (Jupyter Notebook, 12 stars)
16. spark_tree2lambda - Python micro-service to serve a decision tree trained with Spark through AWS Lambda (Jupyter Notebook, 9 stars)
17. session-path - SessionPath is a deep learning model that provides personalized category suggestions for type-ahead APIs; this repo re-implements the original paper (https://arxiv.org/abs/2005.12781) leveraging Ludwig capabilities (Python, 6 stars)
18. tarski-2.0 - Old-style computational semantics at the time of Python 3.6 (Python, 5 stars)
19. magic-the-gpthering - Playground for generating cards in the style of "Magic The Gathering" using generative AI (Python, 4 stars)
20. webppl_to_lambda_serverless - Deploying a webppl probabilistic program as an (AWS lambda) endpoint (JavaScript, 4 stars)
21. On-the-plurality-of-graphs - WIP code for the "on the plurality of graphs" paper (Jupyter Notebook, 3 stars)
22. jacopotagliabue.github.io - Personal website (2 stars)
23. how-much-is-a-billion - Generating meaningful perspectives with NLP and Probabilistic Programming (JavaScript, 2 stars)