• Stars: 1,285
• Rank: 36,615 (Top 0.8%)
• Language: Jupyter Notebook
• License: BSD 3-Clause "New...
• Created: over 3 years ago
• Updated: about 1 month ago


Repository Details

A scikit-learn-compatible module to estimate prediction intervals and control risks based on conformal predictions.


https://github.com/simai-ml/MAPIE/raw/master/doc/images/mapie_logo_nobg_cut.png

MAPIE - Model Agnostic Prediction Interval Estimator

MAPIE allows you to easily estimate prediction intervals (or prediction sets) using your favourite scikit-learn-compatible model for single-output regression or multi-class classification settings.

Prediction intervals output by MAPIE encompass both aleatoric and epistemic uncertainties and are backed by strong theoretical guarantees thanks to conformal prediction methods [1-7].

🔗 Requirements

Python 3.7+

MAPIE stands on the shoulders of giants.

Its only internal dependencies are scikit-learn and numpy>=1.21.

🛠 Installation

Install via pip:

$ pip install mapie

or via conda:

$ conda install -c conda-forge mapie

To install directly from the GitHub repository:

$ pip install git+https://github.com/scikit-learn-contrib/MAPIE

⚡️ Quickstart

Let us start with a basic regression problem. Here, we generate one-dimensional noisy data that we fit with a linear model.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression

regressor = LinearRegression()
X, y = make_regression(n_samples=500, n_features=1, noise=20, random_state=59)

Since MAPIE is compliant with the standard scikit-learn API, we follow the usual sequential fit and predict workflow of any scikit-learn regressor. We set two values for alpha to estimate prediction intervals at approximately one and two standard deviations from the mean.

from mapie.regression import MapieRegressor
alpha = [0.05, 0.32]
mapie = MapieRegressor(regressor)
mapie.fit(X, y)
y_pred, y_pis = mapie.predict(X, alpha=alpha)

MAPIE returns two arrays: y_pred, of shape (n_samples,), containing the point predictions, and y_pis, of shape (n_samples, 2, len(alpha)), containing the lower and upper bounds of the prediction intervals for each desired alpha value.
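As a quick sanity check (using the quickstart code above, where n_samples is 500 and two alpha values are requested), you can inspect the returned shapes directly:

print(y_pred.shape)  # (500,)       point predictions
print(y_pis.shape)   # (500, 2, 2)  (n_samples, lower/upper bound, len(alpha))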

You can compute the coverage of your prediction intervals.

from mapie.metrics import regression_coverage_score
coverage_scores = [
    regression_coverage_score(y, y_pis[:, 0, i], y_pis[:, 1, i])
    for i, _ in enumerate(alpha)
]

The estimated prediction intervals can then be plotted as follows.

from matplotlib import pyplot as plt
plt.xlabel("x")
plt.ylabel("y")
plt.scatter(X, y, alpha=0.3)
plt.plot(X, y_pred, color="C1")
order = np.argsort(X[:, 0])
plt.plot(X[order], y_pis[order][:, 0, 1], color="C1", ls="--")
plt.plot(X[order], y_pis[order][:, 1, 1], color="C1", ls="--")
plt.fill_between(
    X[order].ravel(),
    y_pis[order][:, 0, 0].ravel(),
    y_pis[order][:, 1, 0].ravel(),
    alpha=0.2
)
plt.title(
    f"Target and effective coverages for "
    f"alpha={alpha[0]:.2f}: ({1-alpha[0]:.3f}, {coverage_scores[0]:.3f})\n"
    f"Target and effective coverages for "
    f"alpha={alpha[1]:.2f}: ({1-alpha[1]:.3f}, {coverage_scores[1]:.3f})"
)
plt.show()

The title of the plot compares the target coverages with the effective coverages. The target coverage, or confidence level, is the fraction of true labels that we aim to have lying within the prediction intervals for a given dataset. It is set through the alpha parameter passed to the predict method, here equal to 0.05 and 0.32, thus giving target coverages of 0.95 and 0.68. The effective coverage is the actual fraction of true labels lying within the prediction intervals.

https://github.com/simai-ml/MAPIE/raw/master/doc/images/quickstart_1.png
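The effective coverage can also be recomputed by hand as the fraction of true labels falling inside the intervals. Here is a minimal sketch using the arrays from the quickstart above, giving the same values as regression_coverage_score:

# For each alpha, count how many true labels fall between the lower and upper bounds.
for i, a in enumerate(alpha):
    lower, upper = y_pis[:, 0, i].ravel(), y_pis[:, 1, i].ravel()
    effective = ((y >= lower) & (y <= upper)).mean()
    print(f"alpha={a:.2f}  target={1 - a:.3f}  effective={effective:.3f}")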

📘 Documentation

The full documentation can be found on ReadTheDocs: https://mapie.readthedocs.io

How does MAPIE work?

MAPIE relies on two types of techniques:

Cross conformal predictions

  • Conformity scores on the whole training set obtained by cross-validation,
  • Perturbed models generated during the cross-validation.

MAPIE then combines all these elements in a way that provides prediction intervals on new data with strong theoretical guarantees [1-2].

https://github.com/simai-ml/MAPIE/raw/master/doc/images/mapie_internals_regression.png
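For instance, a cross-conformal strategy can be selected through the method and cv arguments of MapieRegressor. The following is a minimal sketch, assuming the "plus" method (the jackknife+/CV+ strategy) and five folds are available in your installed version:

from sklearn.linear_model import LinearRegression
from mapie.regression import MapieRegressor

# Conformity scores are computed out-of-fold over the whole training set,
# and the perturbed models trained on each fold are reused at prediction time.
mapie_cv = MapieRegressor(LinearRegression(), method="plus", cv=5)
mapie_cv.fit(X, y)
y_pred_cv, y_pis_cv = mapie_cv.predict(X, alpha=0.05)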

Split conformal predictions

  • Construction of a conformity score
  • Calibration of the conformity score on a calibration set not seen by the model during training

MAPIE then uses the calibrated conformity scores to estimate sets of labels associated with the desired coverage on new data with strong theoretical guarantees [3-4-5].

https://github.com/simai-ml/MAPIE/raw/master/doc/images/mapie_internals_classification.png
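Below is a minimal sketch of the split-conformal workflow for classification, assuming MapieClassifier with cv="prefit" so that conformity scores are calibrated on data not seen during training; parameter names and defaults may differ slightly across versions:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from mapie.classification import MapieClassifier

X_clf, y_clf = make_classification(
    n_samples=1000, n_classes=3, n_informative=5, random_state=59
)
X_train, X_calib, y_train, y_calib = train_test_split(
    X_clf, y_clf, test_size=0.3, random_state=59
)

clf = LogisticRegression().fit(X_train, y_train)               # model trained beforehand
mapie_clf = MapieClassifier(clf, method="score", cv="prefit")  # only calibrates conformity scores
mapie_clf.fit(X_calib, y_calib)
# y_sets is a boolean array of shape (n_samples, n_classes, len(alpha)) encoding the prediction sets
y_pred_clf, y_sets = mapie_clf.predict(X_calib, alpha=0.1)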

📝 Contributing

You are welcome to propose and contribute new ideas. We encourage you to open an issue so that we can align on the work to be done. It is generally a good idea to have a quick discussion before opening a pull request that is potentially out of scope. For more information on the contribution process, please refer to the contribution guidelines in the repository.

🤝 Affiliations

MAPIE has been developed through a collaboration between Quantmetry, Michelin, and ENS Paris-Saclay, with financial support from Région Ile de France and Confiance.ai.

Quantmetry Michelin ENS Confiance.ai IledeFrance

🔍 References

MAPIE methods belong to the field of conformal inference.

[1] Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. "Predictive inference with the jackknife+." Ann. Statist., 49(1):486–507, February 2021.

[2] Byol Kim, Chen Xu, and Rina Foygel Barber. "Predictive Inference Is Free with the Jackknife+-after-Bootstrap." 34th Conference on Neural Information Processing Systems (NeurIPS 2020).

[3] Mauricio Sadinle, Jing Lei, and Larry Wasserman. "Least Ambiguous Set-Valued Classifiers With Bounded Error Levels." Journal of the American Statistical Association, 114:525, 223-234, 2019.

[4] Yaniv Romano, Matteo Sesia, and Emmanuel J. Candès. "Classification with Valid and Adaptive Coverage." NeurIPS 2020 (spotlight).

[5] Anastasios Nikolas Angelopoulos, Stephen Bates, Michael Jordan and Jitendra Malik. "Uncertainty Sets for Image Classifiers using Conformal Prediction." International Conference on Learning Representations 2021.

[6] Yaniv Romano, Evan Patterson, and Emmanuel J. Candès. "Conformalized Quantile Regression." Advances in Neural Information Processing Systems 32 (2019).

[7] Chen Xu and Yao Xie. "Conformal Prediction Interval for Dynamic Time-Series." International Conference on Machine Learning (ICML, 2021).

[8] Stephen Bates, Anastasios Angelopoulos, Lihua Lei, Jitendra Malik, and Michael I. Jordan. "Distribution-Free, Risk-Controlling Prediction Sets." CoRR, abs/2101.02703, 2021. URL: https://arxiv.org/abs/2101.02703.

[9] Anastasios N. Angelopoulos, Stephen Bates, Adam Fisch, Lihua Lei, and Tal Schuster. "Conformal Risk Control." (2022).

📝 License

MAPIE is free and open-source software licensed under the 3-clause BSD license.

More Repositories

1. imbalanced-learn - A Python Package to Tackle the Curse of Imbalanced Datasets in Machine Learning (Python, 6,549 stars)
2. sklearn-pandas - Pandas integration with sklearn (Python, 2,803 stars)
3. hdbscan - A high performance implementation of HDBSCAN clustering. (Jupyter Notebook, 2,795 stars)
4. category_encoders - A library of sklearn compatible categorical variable encoders (Python, 2,405 stars)
5. lightning - Large-scale linear classification, regression and ranking in Python (Python, 1,716 stars)
6. boruta_py - Python implementations of the Boruta all-relevant feature selection method. (Python, 1,474 stars)
7. metric-learn - Metric learning algorithms in Python (Python, 1,346 stars)
8. skope-rules - Machine learning with logical rules in Python (Jupyter Notebook, 541 stars)
9. DESlib - A Python library for dynamic classifier and ensemble selection (Python, 479 stars)
10. py-earth - A Python implementation of Jerome Friedman's Multivariate Adaptive Regression Splines (Python, 444 stars)
11. scikit-learn-contrib - scikit-learn compatible projects (400 stars)
12. project-template - A template for scikit-learn extensions (Python, 316 stars)
13. forest-confidence-interval - Confidence intervals for scikit-learn forest algorithms (HTML, 282 stars)
14. polylearn - A library for factorization machines and polynomial networks for classification and regression in Python. (Python, 245 stars)
15. stability-selection - scikit-learn compatible implementation of stability selection. (Python, 195 stars)
16. skglm - Fast and modular sklearn replacement for generalized linear models (Python, 157 stars)
17. scikit-learn-extra - scikit-learn contrib estimators (Python, 155 stars)
18. qolmat - A scikit-learn-compatible module for comparing imputation methods. (Python, 134 stars)
19. hiclass - A Python library for hierarchical classification compatible with scikit-learn (Python, 113 stars)
20. scikit-dimension - A Python package for intrinsic dimension estimation (Python, 78 stars)
21. scikit-matter - A collection of scikit-learn compatible utilities that implement methods born out of the materials science and chemistry communities (Python, 76 stars)
22. skdag - A more flexible alternative to scikit-learn Pipelines (Python, 29 stars)
23. denmune-clustering-algorithm - DenMune is a clustering algorithm that can find clusters of arbitrary size, shape and density in two dimensions; higher dimensions are first reduced to 2-D using t-SNE. The algorithm relies on a single parameter K (the number of nearest neighbors). (Jupyter Notebook, 29 stars)
24. mimic - Mimic calibration (Python, 21 stars)
25. sklearn-ann - Integration with (approximate) nearest neighbors libraries for scikit-learn, plus clustering based on kNN graphs. (Python, 14 stars)
26. scikit-learn-contrib.github.io - Project webpage (HTML, 4 stars)