
Uncertainty quantification for deep learning models in PyTorch 🌱

TorchUncertainty is a package designed to help you leverage uncertainty quantification techniques and make your deep neural networks more reliable. It aims to be collaborative and to include as many methods as possible, so reach out to add yours!

🚧 TorchUncertainty is in early development 🚧 - expect changes, but reach out and contribute if you are interested in the project! Please raise an issue if you encounter bugs or difficulties, and join the Discord server.


This package provides a multi-level API, including:

  • ready-to-train baselines on research datasets, such as ImageNet and CIFAR
  • deep learning baselines available for training on your own datasets
  • pretrained weights for these baselines on ImageNet and CIFAR (work in progress 🚧)
  • layers available for use in your own networks, illustrated with a toy sketch after this list
  • scikit-learn-style post-processing methods, such as Temperature Scaling
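As a toy illustration of the layer level, here is a minimal ensemble-style linear module in plain PyTorch. It sketches the general idea of such layers only; the class and argument names are hypothetical and do not reflect TorchUncertainty's actual API.

import torch
from torch import nn

class NaiveEnsembleLinear(nn.Module):
    # Hypothetical sketch: one weight matrix per ensemble member, applied in parallel.
    def __init__(self, in_features, out_features, num_estimators):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(num_estimators, in_features, out_features) * 0.01
        )
        self.bias = nn.Parameter(torch.zeros(num_estimators, out_features))

    def forward(self, x):
        # x: (batch, in_features) -> (num_estimators, batch, out_features)
        return torch.einsum("bi,eio->ebo", x, self.weight) + self.bias[:, None, :]

Averaging the member outputs (e.g., their softmax probabilities) then yields an ensemble prediction together with a dispersion estimate.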

See the Reference page or the API reference for a more exhaustive list of the implemented methods, datasets, metrics, etc.

Installation

Install the desired PyTorch version in your environment. Then, install the package from PyPI:

pip install torch-uncertainty
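A quick sanity check that the installation succeeded (assuming the standard PyPI layout, with the module importable as torch_uncertainty):

import torch
import torch_uncertainty  # should import cleanly after `pip install torch-uncertainty`

print(torch.__version__)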

If you aim to contribute, have a look at the contribution page.

Getting Started and Documentation

Please find the documentation at torch-uncertainty.github.io.

A quickstart is available at torch-uncertainty.github.io/quickstart.

Implemented methods

Baselines

To date, the following deep learning baselines have been implemented (a plain-PyTorch sketch of the MC-Dropout idea follows the list):

  • Deep Ensembles
  • MC-Dropout - Tutorial
  • BatchEnsemble
  • Masksembles
  • MIMO
  • Packed-Ensembles (see blog post) - Tutorial
  • Bayesian Neural Networks 🚧 Work in progress 🚧 - Tutorial
  • Regression with Beta Gaussian NLL Loss
  • Deep Evidential Classification & Regression - Tutorial
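For intuition, the MC-Dropout baseline can be sketched in plain PyTorch, independently of TorchUncertainty's API: dropout stays active at inference time, and several stochastic forward passes are averaged to estimate predictive uncertainty.

import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 10)
)

def mc_dropout_predict(model, x, num_samples=20):
    model.train()  # keeps Dropout stochastic (this toy model has no BatchNorm to worry about)
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(num_samples)])
    return probs.mean(dim=0), probs.std(dim=0)  # predictive mean and per-class dispersion

mean, std = mc_dropout_predict(model, torch.randn(8, 32))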

Augmentation methods

The following data augmentation methods have been implemented (a sketch of the basic Mixup recipe follows the list):

  • Mixup, MixupIO, RegMixup, WarpingMixup
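For reference, the vanilla Mixup recipe that these variants build on fits in a few lines of plain PyTorch (this is the original method, not the package's implementation):

import torch

def mixup(x, y, alpha=1.0):
    # Sample a mixing coefficient and a random pairing of the batch.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], y, y[perm], lam

The training loss mixes accordingly: loss = lam * criterion(pred, y_a) + (1 - lam) * criterion(pred, y_b).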

Post-processing methods

To date, the following post-processing methods have been implemented (the simplest, temperature scaling, is sketched below):

  • Temperature, Vector, & Matrix scaling - Tutorial
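As a standalone sketch of the underlying idea, temperature scaling fits a single positive scalar T on held-out logits by minimizing the NLL; TorchUncertainty wraps such methods behind its scikit-learn-style interface, whereas the plain-PyTorch version below fits T directly.

import torch
from torch import nn

def fit_temperature(val_logits, val_labels):
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so that T = exp(log_t) > 0
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)
    nll = nn.CrossEntropyLoss()

    def closure():
        optimizer.zero_grad()
        loss = nll(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().detach()  # divide test logits by T before the softmax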

Tutorials

We provide tutorials in our documentation; they are also linked next to the corresponding methods in the lists above.

Awesome Uncertainty repositories

You may find many papers on modern uncertainty estimation techniques in the Awesome Uncertainty in Deep Learning repository.

Other References

This package also contains the official implementation of Packed-Ensembles.

If you find the corresponding models interesting, please consider citing our paper:

@inproceedings{laurent2023packed,
    title={Packed-Ensembles for Efficient Uncertainty Estimation},
    author={Laurent, Olivier and Lafage, Adrien and Tartaglione, Enzo and Daniel, Geoffrey and Martinez, Jean-Marc and Bursuc, Andrei and Franchi, Gianni},
    booktitle={ICLR},
    year={2023}
}