  • Stars: 1,845
  • Rank: 25,153 (Top 0.5%)
  • Language: Python
  • License: MIT License
  • Created: almost 6 years ago
  • Updated: about 5 years ago

Repository Details

Provide an input CSV and a target field to predict, generate a model + code to run it.

automl-gs

console gif

Give automl-gs an input CSV file and a target field you want to predict, and get a trained, high-performing machine learning or deep learning model plus native Python code pipelines that let you integrate the model into any prediction workflow. There is no black box: you can see exactly how the data is processed and how the model is constructed, and you can make tweaks as necessary.

demo output

automl-gs is an AutoML tool which, unlike Microsoft's NNI, Uber's Ludwig, and TPOT, offers a zero-code/model-definition interface for getting an optimized model and data transformation pipeline in multiple popular ML/DL frameworks, with minimal Python dependencies (pandas + scikit-learn + your framework of choice). automl-gs is designed for citizen data scientists and engineers without a deep statistical background, under the philosophy that you shouldn't need to know modern data preprocessing and machine learning engineering techniques to create a powerful prediction workflow.

Nowadays, the cost of computing many different models and hyperparameters is much lower than the opportunity cost of a data scientist's time. automl-gs is a Python 3 module designed to abstract away the common approaches to transforming tabular data, architecting machine learning/deep learning models, and performing random hyperparameter searches to identify the best-performing model. This lets data scientists and researchers spend more of their time on model performance optimization.

  • Generates native Python code; no platform lock-in, and no need to use automl-gs after the model script is created.
  • Train model configurations super-fast for free using a TPU and TensorFlow in Google Colaboratory (in beta: you can access the Colaboratory notebook here).
  • Handles messy datasets that normally require manual intervention, such as datetime/categorical encoding and spaced/parenthesized column names.
  • Each part of the generated model pipeline is its own function w/ docstrings, making it much easier to integrate into production workflows.
  • Extremely detailed metrics reporting for every trial stored in a tidy CSV, allowing you to identify and visualize model strengths and weaknesses.
  • Correct serialization of data pipeline encoders on disk (i.e. no pickled Python objects!)
  • Retrain the generated model on new data without making any code/pipeline changes.
  • Quit the hyperparameter search at any time, as the results are saved after each trial.
  • Training progress bars with ETAs for both the overall experiment and per-epoch during the experiment.
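The datetime handling mentioned above can be pictured with a minimal pandas sketch; this is illustrative only, not automl-gs's actual generated code, and the column names are made up:

```python
import pandas as pd

# One-hot (binary) encode hour and day-of-week from a datetime column,
# in the spirit of automl-gs's default datetime strategy.
df = pd.DataFrame({"ts": pd.to_datetime(["2019-03-17 02:04:34",
                                         "2019-03-18 14:30:00"])})

hour = pd.get_dummies(df["ts"].dt.hour, prefix="ts_hour")
dayofweek = pd.get_dummies(df["ts"].dt.dayofweek, prefix="ts_dow")
encoded = pd.concat([hour, dayofweek], axis=1)

print(sorted(encoded.columns))
```

Hyperparameters may additionally pull in month and year as model fields, following the same pattern.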

The models generated by automl-gs are intended to give a very strong baseline for solving a given problem; they're not the end-all-be-all that often accompanies the AutoML hype, but the resulting code is easily tweakable to improve from the baseline.

You can view the hyperparameters and their values here, and the metrics that can be optimized here. Some of the more controversial design decisions for the generated models are noted in DESIGN.md.

Framework Support

Currently automl-gs supports the generation of models for regression and classification problems using the following Python frameworks:

  • TensorFlow (via tf.keras) | tensorflow
  • XGBoost (w/ histogram binning) | xgboost

To be implemented:

  • Catboost | catboost
  • LightGBM | lightgbm

How to Use

automl-gs can be installed via pip:

pip3 install automl_gs

You will also need to install the corresponding ML/DL framework (e.g. tensorflow/tensorflow-gpu for TensorFlow, xgboost for XGBoost, etc.).

After that, you can run it directly from the command line. For example, with the famous Titanic dataset:

automl_gs titanic.csv Survived

If you want to use a different framework or configure the training, you can do it with flags:

automl_gs titanic.csv Survived --framework xgboost --num_trials 1000

You may also invoke automl-gs directly from Python (e.g. via a Jupyter Notebook):

from automl_gs import automl_grid_search

automl_grid_search('titanic.csv', 'Survived')

The output of the automl-gs training is:

  • A timestamped folder (e.g. automl_tensorflow_20190317_020434) which contains:
    • model.py: The generated model file.
    • pipeline.py: The generated pipeline file.
    • requirements.txt: The generated requirements file.
    • /encoders: A folder containing JSON-serialized encoder files.
    • /metadata: A folder containing training statistics + other cool stuff not yet implemented!
    • The model itself (format depends on the framework).
  • automl_results.csv: A CSV containing the training results after each epoch and the hyperparameters used to train at that time.
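The benefit of JSON-serialized encoders is that they can be reloaded anywhere without unpickling Python objects. A minimal sketch of the idea; the actual file layout inside /encoders is defined by automl-gs, so the JSON below is hypothetical:

```python
import json

# Hypothetical contents of an encoder file for a categorical column.
encoder_json = '{"categories": ["female", "male"]}'
categories = json.loads(encoder_json)["categories"]

# Rebuild a category -> index mapping and apply it to new data;
# unseen values fall back to -1 in this sketch.
mapping = {cat: i for i, cat in enumerate(categories)}
encoded = [mapping.get(v, -1) for v in ["male", "female", "unknown"]]
print(encoded)  # -> [1, 0, -1]
```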

Once the training is done, you can run the generated files from the command line within the generated folder above.

To predict:

python3 model.py -d data.csv -m predict

To retrain the model on new data:

python3 model.py -d data.csv -m train

CLI Arguments/Function Parameters

You can view these at any time by running automl_gs -h in the command line.

  • csv_path: Path to the CSV file (must be in the current directory) [Required]
  • target_field: Target field to predict [Required]
  • target_metric: Target metric to optimize [Default: Automatically determined depending on problem type]
  • framework: Machine learning framework to use [Default: 'tensorflow']
  • model_name: Name of the model (if you want to train models with different names) [Default: 'automl']
  • num_trials: Number of trials / different hyperparameter combos to test. [Default: 100]
  • split: Train-validation split when training the models [Default: 0.7]
  • num_epochs: Number of epochs / passes through the data when training the models. [Default: 20]
  • col_types: Dictionary of field:data-type pairs used to override automl-gs's guesses (only available when using the Python interface) [Default: {}]
  • gpu: For non-TensorFlow frameworks and Pascal-or-later GPUs, boolean to determine whether to use GPU-optimized training methods (TensorFlow detects the GPU automatically) [Default: False]
  • tpu_address: For TensorFlow, hardware address of the TPU on the system. [Default: None]
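To see why a `col_types` override can be useful, here is a rough heuristic in the spirit of column type inference; the real inference rules live in automl-gs, and the 0.9 uniqueness threshold here is an assumption for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "Fare": [7.25, 71.28, 8.05],
    "Sex": ["male", "female", "male"],
    "Name": ["Braund", "Cumings", "Heikkinen"],
})

def guess_type(s: pd.Series) -> str:
    # Numeric dtype -> numeric; low-cardinality strings -> categorical;
    # mostly-unique strings -> free text.
    if pd.api.types.is_numeric_dtype(s):
        return "numeric"
    if s.nunique() / len(s) < 0.9:
        return "categorical"
    return "text"

print({col: guess_type(df[col]) for col in df.columns})
```

When a heuristic like this guesses wrong (e.g. a numeric ID column that is really categorical), passing `col_types` from Python corrects it without touching the data.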

Examples

For a quick Hello World on how to use automl-gs, see this Jupyter Notebook.

Due to the size of some examples with generated code and accompanying data visualizations, they are maintained in a separate repository (which also explains why there are two distinct "levels" in the example viz above!).

How automl-gs Works

TL;DR: automl-gs generates raw Python code using Jinja templates and trains a model using the generated code in a subprocess; it repeats this with different hyperparameters until done and saves the best model.

automl-gs loads a given CSV and infers the data type of each column to be fed into the model. It then tries an ETL strategy for each column field as determined by the hyperparameters; for example, a Datetime field has its hour and dayofweek binary-encoded by default, but hyperparameters may dictate the encoding of month and year as additional model fields. ETL strategies are optimized per framework; TensorFlow, for example, will use text embeddings, while other frameworks will use CountVectorizers to encode text (when training, TensorFlow will also use a shared text encoder via Keras's functional API). automl-gs then creates a statistical model with the specified framework. Both the ETL functions and the model construction functions are saved as a generated Python script.
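The code-generation step can be pictured like this. automl-gs uses Jinja templates; stdlib `string.Template` serves as a dependency-free stand-in here, and the template body is illustrative rather than one of automl-gs's real templates:

```python
from string import Template

# A trial's hyperparameters are rendered into a template to emit a
# standalone training script (automl-gs does this with Jinja).
template = Template("""\
model = xgb.XGBClassifier(
    max_depth=$max_depth,
    learning_rate=$learning_rate,
)
""")

code = template.substitute(max_depth=6, learning_rate=0.1)
print(code)
```

Because the output is ordinary source code, the user can read and edit exactly what will be executed.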

automl-gs then runs the generated training script as if it were a typical user. Once the model is trained, automl-gs saves the training results in its own CSV, along with all the hyperparameters used to train the model. automl-gs then repeats the task with another set of hyperparameters, until the specified number of trials is hit or the user kills the script.

The best model Python script is kept after each trial, which can then easily be integrated into other scripts, or run directly to get the prediction results on a new dataset.
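The overall trial loop described above can be sketched as follows. `run_trial` stands in for "render the script, train it in a subprocess, read back the metric" and returns a made-up score so the sketch stays self-contained; the search space values are also illustrative:

```python
import random

random.seed(0)

search_space = {"max_depth": [3, 6, 9], "learning_rate": [0.01, 0.1, 0.3]}

def run_trial(params):
    # Stand-in for: generate the training script, run it in a subprocess,
    # and read the resulting metric from the results CSV.
    return 1.0 / (params["max_depth"] * params["learning_rate"])

best_score, best_params = float("-inf"), None
for _ in range(10):  # num_trials
    params = {key: random.choice(vals) for key, vals in search_space.items()}
    score = run_trial(params)
    if score > best_score:  # only the best trial's artifacts are kept
        best_score, best_params = score, params

print(best_params, best_score)
```

Because results are written after every trial, interrupting the loop at any point still leaves the best-so-far model on disk.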

Helpful Notes

  • It is the user's responsibility to ensure the input dataset is high-quality. No hyperparameter search will produce good results on flawed/unbalanced datasets. Relatedly, hyperparameter optimization may yield optimistic scores on the validation set that do not necessarily match the model's performance in the real world.
  • A neural network may not necessarily be the best approach. Try xgboost; the results may surprise you!
  • automl-gs only attempts to solve tabular data problems. If you have a more complicated problem to solve (e.g. predicting a sequence of outputs), I recommend Microsoft's NNI or Uber's Ludwig, as noted in the introduction.

Known Issues

  • Issues when using Anaconda (#8). Use a non-Anaconda Python if possible.
  • Issues when using Windows (#13)
  • Issues when a field name in the input dataset starts with a number (#18)

Future Work

Feature development will continue on automl-gs as long as there is interest in the package.

Top Priority

  • Add more frameworks
  • Results visualization (via plotnine)
  • Holiday support for datetimes
  • Remove redundant generated code
  • Native distributed/high level automation support (Polyaxon/Kubernetes, Airflow)
  • Image field support (both as a CSV column field, and a special flow mode to take advantage of hyperparameter tuning)
  • PyTorch model code generation.

Elsework

  • Generate script given an explicit set of hyperparameters
  • More hyperparameters.
  • Bayesian hyperparameter search for standalone version.
  • Support for generating model code for R/Julia
  • Tool for generating a Flask/Starlette REST API from a trained model script
  • Allow passing an explicit, pre-defined test set CSV.

Maintainer/Creator

Max Woolf (@minimaxir)

Max's open-source projects are supported by his Patreon. If you found this project helpful, any monetary contributions to the Patreon are appreciated and will be put to good creative use.

License

MIT

The code generated by automl-gs is unlicensed; the owner of the generated code can decide the license.
