  • Language: Jupyter Notebook
  • License: Apache License 2.0

A community repository for benchmarking Bayesian methods

Bayesian Benchmarks

This is a set of tools for evaluating Bayesian models, together with benchmark implementations and results.
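Evaluation of Bayesian regression models is commonly scored on held-out data with metrics that account for predictive uncertainty, such as the negative log predictive density (NLPD) alongside RMSE. As a minimal sketch (not the repository's actual code), assuming a Gaussian predictive distribution per test point:

```python
import math

def gaussian_nlpd(y, mean, var):
    # Average negative log predictive density under a Gaussian
    # predictive distribution with per-point mean and variance.
    terms = [0.5 * math.log(2 * math.pi * v) + 0.5 * (t - m) ** 2 / v
             for t, m, v in zip(y, mean, var)]
    return sum(terms) / len(terms)

def rmse(y, mean):
    # Root mean squared error of the predictive mean.
    return math.sqrt(sum((t - m) ** 2 for t, m in zip(y, mean)) / len(y))
```

Unlike RMSE, the NLPD penalizes both overconfident and underconfident variance estimates, which is what makes it informative for comparing uncertainty quantification across models.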

Motivations:

  • There is a lack of standardized tasks that meaningfully assess the quality of uncertainty quantification for Bayesian black-box models.
  • Variations between tasks in the literature make a direct comparison between methods difficult.
  • Implementing competing methods takes considerable effort, and there is little incentive to do it well.
  • Published papers may not always provide complete details of implementations due to space considerations.

Aims:

  • Curate a set of benchmarks that meaningfully compare the efficacy of Bayesian models in real-world tasks.
  • Maintain a fair assessment of benchmark methods, with full implementations and results.

Tasks:

  • Classification and regression
  • Density estimation (real-world and synthetic) (TODO)
  • Active learning
  • Adversarial robustness (TODO)
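An active-learning benchmark typically queries labels for the pool points the model is least certain about. A minimal sketch of one common baseline acquisition rule, maximum predictive variance (the function name and setup are illustrative assumptions, not the repository's API):

```python
def select_next(candidate_vars):
    # Variance-based acquisition: return the index of the pool point
    # with the largest predictive variance, i.e. where the model is
    # least certain and a label is expected to be most informative.
    return max(range(len(candidate_vars)), key=lambda i: candidate_vars[i])
```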

Current implementations:

See the models folder for instructions on adding new models.
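For benchmarks like these, a new model usually only needs to expose a small fit/predict interface returning predictive means and variances. The class and method names below are hypothetical illustrations of such an interface, not the repository's actual one; the model shown is a trivial constant-Gaussian baseline:

```python
class ConstantGaussianModel:
    # Hypothetical baseline: predicts the training-set mean with the
    # training-set variance at every test input. Useful as a sanity
    # check that stronger models must beat.

    def fit(self, X, Y):
        n = len(Y)
        self.mean = sum(Y) / n
        self.var = sum((y - self.mean) ** 2 for y in Y) / n

    def predict(self, X):
        # Return predictive means and variances, one pair per test input.
        return [self.mean] * len(X), [self.var] * len(X)
```

Keeping the interface this small is what lets a benchmark suite run every registered model over every task uniformly.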

Coming soon: