
A web-based application for quick, scalable, and automated hyperparameter tuning and stacked ensembling in Python.

Xcessiv


Xcessiv is a tool to help you create the biggest, craziest, and most excessive stacked ensembles you can think of.

Stacked ensembles are simple in theory. You combine the predictions of smaller models and feed those into another model. However, in practice, implementing them can be a major headache.
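The core idea can be sketched in a few lines of plain scikit-learn (this is the general technique, not Xcessiv's API): train base models on one slice of the data, then train a secondary meta-learner on their predictions for a held-out slice.

```python
# Minimal stacking sketch: base learners' predictions become the
# meta-learner's input features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_hold, y_train, y_hold = train_test_split(X, y, random_state=0)

# Base learners: each produces class probabilities for the holdout set.
base_learners = [
    RandomForestClassifier(n_estimators=50, random_state=0),
    LogisticRegression(max_iter=1000),
]
meta_features = np.column_stack([
    model.fit(X_train, y_train).predict_proba(X_hold)[:, 1]
    for model in base_learners
])

# Secondary (meta) learner trained on the base learners' predictions.
meta_learner = LogisticRegression().fit(meta_features, y_hold)
print(meta_features.shape)  # one column per base learner
```

The headache in practice is everything around this sketch: keeping folds consistent across base learners, saving meta-features, and tracking which combinations have already been tried.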

Xcessiv holds your hand through all the implementation details of creating and optimizing stacked ensembles so you're free to fully define only the things you care about.

The Xcessiv process

Define your base learners and performance metrics

[Screenshot: define_base_learner]
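In Xcessiv, base learners and metrics are defined as ordinary Python code. A hypothetical sketch of what such a definition looks like (the `base_learner` and `metric` names here are illustrative, not necessarily Xcessiv's exact conventions):

```python
# Illustrative base learner and metric definition. Any estimator that
# follows the scikit-learn API can serve as the base learner.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

base_learner = RandomForestClassifier(n_estimators=100, random_state=8)

def metric(y_true, y_pred):
    """Performance metric evaluated on the base learner's predictions."""
    return accuracy_score(y_true, y_pred)
```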

Keep track of hundreds of different model-hyperparameter combinations

[Screenshot: list_base_learner]

Effortlessly choose your base learners and create an ensemble with the click of a button

[Screenshot: ensemble]

Features

  • Fully define your data source, cross-validation process, relevant metrics, and base learners with Python code
  • Any model following the Scikit-learn API can be used as a base learner
  • Task-queue-based architecture lets you take full advantage of multiple cores for embarrassingly parallel hyperparameter searches
  • Direct integration with TPOT for automated pipeline construction
  • Automated hyperparameter search through Bayesian optimization
  • Easy management and comparison of hundreds of different model-hyperparameter combinations
  • Automatic saving of generated secondary meta-features
  • Stacked ensemble creation in a few clicks
  • Automated ensemble construction through greedy forward model selection
  • Export your stacked ensemble as a standalone Python file to support multiple levels of stacking
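Because any model following the scikit-learn API qualifies as a base learner, even a hand-rolled estimator works, provided it implements `fit` and `predict` and inherits the standard `get_params`/`set_params` machinery from `BaseEstimator`. A minimal sketch:

```python
# A toy custom estimator that satisfies the scikit-learn API and could
# therefore act as a base learner.
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin

class MajorityClassifier(BaseEstimator, ClassifierMixin):
    """Always predicts the most frequent class seen during fit."""

    def fit(self, X, y):
        values, counts = np.unique(y, return_counts=True)
        self.majority_ = values[np.argmax(counts)]
        return self

    def predict(self, X):
        return np.full(len(X), self.majority_)

clf = MajorityClassifier().fit([[0], [1], [2]], [1, 1, 0])
print(clf.predict([[5], [6]]))  # → [1 1]
```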

Installation and Documentation

You can find installation instructions and detailed documentation on the project's hosted documentation site.

FAQ

Where does Xcessiv fit in the machine learning process?

Xcessiv fits in the model building part of the process after data preparation and feature engineering. At this point, there is no universally acknowledged way of determining which algorithm will work best for a particular dataset (see No Free Lunch Theorem), and while heuristic optimization methods do exist, things often break down into trial and error as you try to find the best model-hyperparameter combinations.
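That trial-and-error loop is what scikit-learn's grid search automates for a single model; a quick sketch of the pattern (Xcessiv manages the same search across many models, with Bayesian optimization as an alternative):

```python
# Exhaustive trial and error over model-hyperparameter combinations,
# scored by cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, random_state=0)
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [10, 50], "max_depth": [2, None]},
    cv=3,
)
search.fit(X, y)
print(search.best_params_)  # the combination with the best CV score
```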

Stacking is an almost surefire way to improve performance beyond that of any single model. However, the complexity of proper implementation often makes it impractical outside of Kaggle competitions. Xcessiv aims to make the construction of stacked ensembles as painless as possible and to lower the barrier to entry.

I don't care about fancy stacked ensembles and whatnot. Should I still use Xcessiv?

Absolutely! Even without the ensembling functionality, keeping track of the performance of hundreds or even thousands of model-hyperparameter combinations is a huge boon in itself.

How does Xcessiv generate meta-features for stacking?

You can choose whether to generate meta-features through cross-validation (stacked generalization) or with a holdout set (blending). You can read about these two methods and a lot more about stacked ensembles in the Kaggle Ensembling Guide. It's a great article and provides most of the inspiration for this project.
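The two strategies above can be sketched with plain scikit-learn (Xcessiv automates this bookkeeping for you):

```python
# Stacked generalization vs. blending, side by side.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict, train_test_split

X, y = make_classification(n_samples=400, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0)

# Stacked generalization: out-of-fold predictions via cross-validation,
# so every training row gets a meta-feature from a model that never saw it.
cv_meta = cross_val_predict(model, X, y, cv=5, method="predict_proba")

# Blending: fit on a training split, generate meta-features on a holdout.
X_tr, X_hold, y_tr, y_hold = train_test_split(X, y, random_state=0)
hold_meta = model.fit(X_tr, y_tr).predict_proba(X_hold)

print(cv_meta.shape, hold_meta.shape)
```

Cross-validation yields meta-features for every training row at the cost of k model fits; blending is cheaper but leaves fewer rows for the meta-learner.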

Contributing

Xcessiv is in its very early stages and needs the open-source community to guide it along.

There are many ways to contribute to Xcessiv: report a bug, suggest a feature, submit a pull request, improve the documentation, and more.

If you would like to contribute something, please visit our Contributor Guidelines.

Project Status

Xcessiv is currently in alpha and is unstable. Future versions are not guaranteed to be backwards-compatible with current project files.
