• Stars: 9,545
• Rank: 3,601 (Top 0.08%)
• Language: Python
• License: GNU Lesser General Public License v3 (LGPL v3)
• Created: over 8 years ago
• Updated: about 2 months ago


Repository Details

A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.

Master status: [build status badges for Mac/Linux and Windows; coverage badge]

Development status: [build status badges for Mac/Linux and Windows; coverage badge]

Package information: [Python 3.7, License: LGPL v3, and PyPI version badges]


To try the new TPOT2 (alpha), please go here!


TPOT stands for Tree-based Pipeline Optimization Tool. Consider TPOT your Data Science Assistant. TPOT is a Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.

TPOT Demo

TPOT will automate the most tedious part of machine learning by intelligently exploring thousands of possible pipelines to find the best one for your data.
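As a loose illustration of the evolutionary idea (not TPOT's actual implementation), a population of candidate "pipelines" can be scored, the fittest kept, and mutated copies tried in the next generation. In this toy sketch, a pipeline is reduced to two hypothetical numeric hyperparameters, and the fitness function is a stand-in for cross-validation accuracy:

```python
import random

random.seed(42)

# Toy "pipeline": two made-up hyperparameters. The fitness function is a
# stand-in for cross-validated accuracy, peaked at gain=1.0, thresh=0.5.
def fitness(pipeline):
    gain, thresh = pipeline
    return 1.0 - (gain - 1.0) ** 2 - (thresh - 0.5) ** 2

def mutate(pipeline):
    # Small random perturbation of each hyperparameter.
    return tuple(p + random.gauss(0.0, 0.1) for p in pipeline)

def evolve(population_size=50, generations=5):
    # Start from random candidates, then iterate selection + variation.
    population = [(random.uniform(0.0, 2.0), random.uniform(0.0, 1.0))
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]            # selection
        children = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + children                         # next generation
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

TPOT applies the same loop to trees of real scikit-learn operators and hyperparameters, scoring each candidate with cross-validation.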

An example Machine Learning pipeline

Once TPOT is finished searching (or you get tired of waiting), it provides you with the Python code for the best pipeline it found so you can tinker with the pipeline from there.

An example TPOT pipeline

TPOT is built on top of scikit-learn, so all of the code it generates should look familiar... if you're familiar with scikit-learn, anyway.

TPOT is still under active development and we encourage you to check back on this repository regularly for updates.

For further information about TPOT, please see the project documentation.

License

Please see the repository license for the licensing and usage information for TPOT.

Generally, we have licensed TPOT to make it as widely usable as possible.

Installation

We maintain the TPOT installation instructions in the documentation. TPOT requires a working installation of Python.
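For most users, installation is via pip from PyPI (the PyPI badge above indicates the package is published there). A minimal sketch:

```shell
# Install the latest TPOT release from PyPI
pip install tpot
```

See the installation documentation for optional dependencies and platform-specific notes.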

Usage

TPOT can be used on the command line or with Python code.

Click on the corresponding links to find more information on TPOT usage in the documentation.
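As a sketch of the command-line interface (flag names follow the TPOT documentation; run `tpot --help` to confirm them for your installed version, and note that `data.csv` is a hypothetical file path):

```shell
# Run TPOT on a CSV file from the command line:
# -is sets the column separator, -target names the outcome column,
# -g/-p set generations and population size, -s fixes the random seed,
# -v sets verbosity, and -o is the exported pipeline script.
tpot data.csv -is , -target class -g 5 -p 50 -s 42 -v 2 -o tpot_exported_pipeline.py
```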

Examples

Classification

Below is a minimal working example with the optical recognition of handwritten digits dataset.

from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
                                                    train_size=0.75, test_size=0.25, random_state=42)

tpot = TPOTClassifier(generations=5, population_size=50, verbosity=2, random_state=42)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_digits_pipeline.py')

Running this code should discover a pipeline that achieves about 98% testing accuracy, and the corresponding Python code should be exported to the tpot_digits_pipeline.py file and look similar to the following:

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import PolynomialFeatures
from tpot.builtins import StackingEstimator
from tpot.export_utils import set_param_recursive

# NOTE: Make sure that the outcome column is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1)
training_features, testing_features, training_target, testing_target = \
            train_test_split(features, tpot_data['target'], random_state=42)

# Average CV score on the training set was: 0.9799428471757372
exported_pipeline = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False, interaction_only=False),
    StackingEstimator(estimator=LogisticRegression(C=0.1, dual=False, penalty="l1")),
    RandomForestClassifier(bootstrap=True, criterion="entropy", max_features=0.35000000000000003, min_samples_leaf=20, min_samples_split=19, n_estimators=100)
)
# Fix random state for all the steps in exported pipeline
set_param_recursive(exported_pipeline.steps, 'random_state', 42)

exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)

Regression

Similarly, TPOT can optimize pipelines for regression problems. Below is a minimal working example with the practice Boston housing prices data set.

from tpot import TPOTRegressor
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split

housing = load_boston()
X_train, X_test, y_train, y_test = train_test_split(housing.data, housing.target,
                                                    train_size=0.75, test_size=0.25, random_state=42)

tpot = TPOTRegressor(generations=5, population_size=50, verbosity=2, random_state=42)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_boston_pipeline.py')

which should result in a pipeline that achieves about 12.77 mean squared error (MSE), and the Python code in tpot_boston_pipeline.py should look similar to:

import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from tpot.export_utils import set_param_recursive

# NOTE: Make sure that the outcome column is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1)
training_features, testing_features, training_target, testing_target = \
            train_test_split(features, tpot_data['target'], random_state=42)

# Average CV score on the training set was: -10.812040755234403
exported_pipeline = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False, interaction_only=False),
    ExtraTreesRegressor(bootstrap=False, max_features=0.5, min_samples_leaf=2, min_samples_split=3, n_estimators=100)
)
# Fix random state for all the steps in exported pipeline
set_param_recursive(exported_pipeline.steps, 'random_state', 42)

exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)

Check the documentation for more examples and tutorials.

Contributing to TPOT

We welcome you to check the existing issues for bugs or enhancements to work on. If you have an idea for an extension to TPOT, please file a new issue so we can discuss it.

Before submitting any contributions, please review our contribution guidelines.

Having problems or have questions about TPOT?

Please check the existing open and closed issues to see if your issue has already been attended to. If it hasn't, file a new issue on this repository so we can review your issue.

Citing TPOT

If you use TPOT in a scientific publication, please consider citing at least one of the following papers:

Trang T. Le, Weixuan Fu and Jason H. Moore (2020). Scaling tree-based automated machine learning to biomedical big data with a feature set selector. Bioinformatics, 36(1): 250-256.

BibTeX entry:

@article{le2020scaling,
  title={Scaling tree-based automated machine learning to biomedical big data with a feature set selector},
  author={Le, Trang T and Fu, Weixuan and Moore, Jason H},
  journal={Bioinformatics},
  volume={36},
  number={1},
  pages={250--256},
  year={2020},
  publisher={Oxford University Press}
}

Randal S. Olson, Ryan J. Urbanowicz, Peter C. Andrews, Nicole A. Lavender, La Creis Kidd, and Jason H. Moore (2016). Automating biomedical data science through tree-based pipeline optimization. Applications of Evolutionary Computation, pages 123-137.

BibTeX entry:

@inbook{Olson2016EvoBio,
    author={Olson, Randal S. and Urbanowicz, Ryan J. and Andrews, Peter C. and Lavender, Nicole A. and Kidd, La Creis and Moore, Jason H.},
    editor={Squillero, Giovanni and Burelli, Paolo},
    chapter={Automating Biomedical Data Science Through Tree-Based Pipeline Optimization},
    title={Applications of Evolutionary Computation: 19th European Conference, EvoApplications 2016, Porto, Portugal, March 30 -- April 1, 2016, Proceedings, Part I},
    year={2016},
    publisher={Springer International Publishing},
    pages={123--137},
    isbn={978-3-319-31204-0},
    doi={10.1007/978-3-319-31204-0_9},
    url={http://dx.doi.org/10.1007/978-3-319-31204-0_9}
}

Randal S. Olson, Nathan Bartley, Ryan J. Urbanowicz, and Jason H. Moore (2016). Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science. Proceedings of GECCO 2016, pages 485-492.

BibTeX entry:

@inproceedings{OlsonGECCO2016,
    author = {Olson, Randal S. and Bartley, Nathan and Urbanowicz, Ryan J. and Moore, Jason H.},
    title = {Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science},
    booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference 2016},
    series = {GECCO '16},
    year = {2016},
    isbn = {978-1-4503-4206-3},
    location = {Denver, Colorado, USA},
    pages = {485--492},
    numpages = {8},
    url = {http://doi.acm.org/10.1145/2908812.2908918},
    doi = {10.1145/2908812.2908918},
    acmid = {2908918},
    publisher = {ACM},
    address = {New York, NY, USA},
}

Alternatively, you can cite the repository directly with the following DOI:

[DOI badge]

Support for TPOT

TPOT was developed in the Computational Genetics Lab at the University of Pennsylvania with funding from the NIH under grant R01 AI117694. We are incredibly grateful for the support of the NIH and the University of Pennsylvania during the development of this project.

The TPOT logo was designed by Todd Newmuis, who generously donated his time to the project.

More Repositories

1. pmlb — PMLB: A large, curated repository of benchmark datasets for evaluating supervised machine learning algorithms. (Python, 790 stars)
2. KRAGEN — Software to implement GoT with a Weaviate vectorized database. (Python, 459 stars)
3. scikit-rebate — A scikit-learn-compatible Python implementation of ReBATE, a suite of Relief-based feature selection algorithms for Machine Learning. (Python, 401 stars)
4. Aliro — Aliro: AI-Driven Data Science. (JavaScript, 222 stars)
5. ClinicalDataSources — Open or Easy Access Clinical Data Sources for Biomedical Research. (167 stars)
6. tpot2 — A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming. (Jupyter Notebook, 156 stars)
7. scikit-mdr — A sklearn-compatible Python implementation of Multifactor Dimensionality Reduction (MDR) for feature construction. (Python, 125 stars)
8. ReBATE — Relief-based algorithms of ReBATE implemented in Python with Cython optimization. No longer updated; please see scikit-rebate. (Python, 32 stars)
9. MIMIC_trajectories — (Jupyter Notebook, 30 stars)
10. digen — Diverse and generative ML benchmarks. (Jupyter Notebook, 14 stars)
11. ml-analyst — Analysis pipeline for quick ML analyses. (Python, 11 stars)
12. interpret_ehr — Interpretation of machine learning predictions for patient outcomes in electronic health records. (Jupyter Notebook, 9 stars)
13. imputation — https://www.biorxiv.org/content/early/2017/07/24/167858 (Jupyter Notebook, 9 stars)
14. 3DHeatmap — 3D Heatmap tool in Unity3D. Inst. for Biomed. Informatics, Univ. of PA. (C#, 9 stars)
15. hibachi — Data simulation software that creates data sets with particular characteristics. (Python, 8 stars)
16. autoqtl — Automated Quantitative Trait Locus Analysis (AutoQTL). (Python, 7 stars)
17. EBIC.jl — EBIC, a biclustering algorithm in Julia. (Julia, 7 stars)
18. EpistasisLab.github.io — Identifying the complex genetic architectures of disease. (6 stars)
19. qsar-gnn — (TeX, 6 stars)
20. SAFE — The SAFE Algorithm: Solution and Fitness Evolution. (Python, 4 stars)
21. DTox — A knowledge-guided deep learning model for prediction and interpretation of drug toxicity. (HTML, 4 stars)
22. AlzKB — (Python, 4 stars)
23. regens — Recombines real genomic segments to simulate whole genomes. (Python, 4 stars)
24. rebate-benchmark — A centralized repository to benchmark ReBATE performance across a variety of parameter settings and datasets. (Jupyter Notebook, 4 stars)
25. gecco2017-new-benchmarking-standards — Pages for workshops run by Epistasis Lab members. (3 stars)
26. evolved-stats — A research project using genetic programming to discover and optimize statistical tests. (Jupyter Notebook, 3 stars)
27. penn-lpc-scripts — Scripts written for use on the Penn LPC. (Python, 2 stars)
28. LPC — Documentation and informational resources for LPC use. (2 stars)
29. tpot-kaggle — TPOT applications on Kaggle datasets. (Jupyter Notebook, 2 stars)
30. OMNIREP — The OMNIREP algorithm: Coevolving encodings and representations. (Python, 2 stars)
31. SimpleModelView — A Unity package for a simplified model-view system to help with code<->UI management. (ShaderLab, 2 stars)
32. GPT4_and_Review — Using GPT-4 to write a scientific review article: a pilot evaluation study. (Python, 2 stars)
33. regens-analysis — (Shell, 1 star)
34. PICV — Proportional instance cross validation. (1 star)
35. PennAI-Ed — Materials for PennAI-Ed, an initiative to enhance AI and data science education built on PennAI. (Shell, 1 star)
36. TINY — Tiny Genetic Algorithm (GA) and Tiny Genetic Programming (GP). (Python, 1 star)
37. EVE — ENSEMBL VEP on EC2. (1 star)
38. pennai-arm64-deps — Pre-compiled dependencies for building PennAI Docker images on 64-bit ARM OSs (e.g., Raspberry Pi). (1 star)
39. Conservation-Machine-Learning — Conservation Machine Learning. (Python, 1 star)
40. latent_phenotype_project — (Python, 1 star)
41. PAGER — (Python, 1 star)
42. epistasis_detection — Implementation of an efficient algorithm to compute linear regression models for epistasis that permit varied genetic encodings (penetrance functions) of the interactions of loci and provide statistical evidence for epistasis. (Jupyter Notebook, 1 star)
43. VEPDB_populator — Population utilities for the VEPDB distributed annotation database, with an annotator written in Python. (Python, 1 star)