
STAT 991: Topics In Modern Statistical Learning (UPenn, 2022 Spring)

This class surveys advanced topics in statistical learning based on student presentations.

The core topic of the course is uncertainty quantification for machine learning methods. While modern machine learning methods can achieve high predictive accuracy on a wide variety of problems, it is still challenging to properly quantify their uncertainty. There has been a recent surge of work developing methods for this problem, and it is one of the fastest-developing areas in contemporary statistics. This course will survey a variety of problems and approaches, such as calibration, prediction intervals (and sets), conformal inference, and OOD detection. We will discuss both empirically successful/popular methods and theoretically justified ones. See below for a sample of papers.
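
As a concrete illustration of conformal inference, one of the core approaches above, here is a minimal sketch of split conformal prediction intervals for regression. This is not code from the course materials; the model choice, the absolute-residual score, and all names are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def split_conformal_interval(X_train, y_train, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction with absolute-residual scores.

    Returns (1 - alpha) prediction intervals for X_test that are valid
    under exchangeability of calibration and test points, regardless of
    how well the underlying model fits.
    """
    model = LinearRegression().fit(X_train, y_train)  # any regressor works here
    scores = np.abs(y_cal - model.predict(X_cal))     # nonconformity scores
    n = len(scores)
    # Finite-sample-corrected empirical quantile of the calibration scores.
    level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(scores, min(level, 1.0), method="higher")
    preds = model.predict(X_test)
    return preds - q, preds + q

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = X @ np.array([1.0, -2.0]) + rng.normal(size=300)
lo, hi = split_conformal_interval(X[:100], y[:100], X[100:200], y[100:200], X[200:])
print(np.mean((lo <= y[200:]) & (y[200:] <= hi)))  # should be around 0.9
```

The split into training and calibration sets is what makes the coverage guarantee distribution-free: the quantile is computed on scores the model never saw during fitting.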

In addition to the core topic, there may be a (brief) discussion of a few additional topics:

  1. Influential recent "breakthrough" papers applying machine learning (GPT-3, AlphaFold, etc.), to get a sense of the "real" problems people want to solve.
  2. Important recent papers in statistical learning theory, to get a sense of progress on the theoretical foundations of the area.

Part of the class will be based on student presentations of papers. We envision a critical discussion of one or two papers per lecture, with several consecutive lectures on the same theme. The goal is to develop a deep understanding of recent research.

See also the syllabus.

Influential recent ML papers

Why are people excited about ML?

Uncertainty quantification

Why do we need to quantify uncertainty? What are the main approaches?

Conformal prediction++

Tolerance regions and related notions
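
As background for this topic, here is a minimal sketch of the classical distribution-free tolerance interval of Wilks (1941): for an i.i.d. continuous sample, the population fraction covered by the sample range [X_(1), X_(n)] follows a Beta(n - 1, 2) distribution, so the confidence that it covers at least a fraction p is computable exactly. The function name is an illustrative assumption.

```python
from scipy.stats import beta

def range_tolerance_confidence(n, p):
    """Confidence that the sample range [min, max] of n i.i.d. continuous
    observations covers at least a fraction p of the population.
    The coverage of [X_(1), X_(n)] is Beta(n - 1, 2) distributed (Wilks, 1941)."""
    return 1.0 - beta.cdf(p, n - 1, 2)

# Smallest n making [min, max] a (p = 0.9, 95% confidence) tolerance interval:
n = next(n for n in range(2, 1000) if range_tolerance_confidence(n, 0.9) >= 0.95)
print(n)  # 46
```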

Calibration
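
A recurring quantity in this literature is the expected calibration error (ECE), which summarizes how far a classifier's confidence scores are from its empirical accuracy. Here is a minimal sketch of the standard equal-width binned estimator; the binning scheme and names are illustrative assumptions, not code from the course.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: |accuracy - mean confidence| per equal-width confidence
    bin, weighted by the fraction of samples falling in that bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

# Toy usage: a perfectly calibrated predictor has ECE near 0.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=10_000)
correct = (rng.uniform(size=10_000) < conf).astype(float)  # correctness matches confidence
print(expected_calibration_error(conf, correct))
```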

Types of uncertainty

Empirics

Bayesian approaches, ensembles

Baseline methods:

Other approaches:

Dataset shift

Lectures

Lectures 1-2: Introduction. By Edgar Dobriban.

Lectures 3-8: Conformal Prediction, Calibration. By Edgar Dobriban. Caveat: the notes are handwritten and may be hard to read; they will be typed up in the future.

Lecture 9 onwards: student presentations.

Presentation 1: Deep Learning in Medical Imaging by Rongguang Wang.

Presentation 2: Introduction to Fairness in Machine Learning by Harry Wang.

Presentation 3: Conformal Prediction with Dependent Data by Kaifu Wang.

Presentation 4: Bayesian Calibration by Ryan Brill.

Presentation 5: Conditional Randomization Test by Abhinav Chakraborty.

Presentation 6: Distribution Free Prediction Sets and Regression by Anirban Chatterjee.

Presentation 7: Advanced Topics in Fairness by Alexander Tolbert.

Presentation 8: Calibration and Quantile Regression by Ignacio Hounie.

Presentation 9: Conformal Prediction under Distribution Shift by Patrick Chao and Jeffrey Zhang.

Presentation 10: Testing for Outliers with Conformal p-values by Donghwan Lee.

Presentation 11: Out-of-distribution detection and Likelihood Ratio Tests by Alex Nguyen-Le.

Presentation 12: Online Multicalibration and No-Regret Learning by Georgy Noarov.

Presentation 13: Online Asymptotic Calibration by Juan Elenter.

Presentation 14: Calibration in Modern ML by Soham Dan.

Presentation 15: Bayesian Optimization and Some of its Applications by Seong Han.

Presentation 16: Distribution-free Uncertainty Quantification Impossibility and Possibility I by Xinmeng Huang.

Presentation 17: Distribution-free Uncertainty Quantification Impossibility and Possibility II by Shuo Li.

Presentation 18: Top-label calibration and multiclass-to-binary reductions by Shiyun Xu.

Presentation 19: Ensembles for uncertainty quantification by Rahul Ramesh.

Presentation 20: Universal Inference by Behrad Moniri.

Presentation 21: Typicality and OOD detection by Eric Lei.

Presentation 22: Bayesian uncertainty quantification and dropout by Samar Hadou. (See Lecture 27 for an introduction.)

Presentation 23: Distribution-Free Risk-Controlling Prediction Sets by Ramya Ramalingam.

Presentation 24: Task-Driven Detection of Distribution Shifts by Charis Stamouli.

Presentation 25: Calibration: a transformation-based method and a connection with adversarial robustness by Sooyong Jang.

Presentation 26: A Theory of Universal Learning by Raghu Arghal.

Presentation 27: Deep Ensembles: An introduction by Xiayan Ji.

Presentation 28: Why are Convolutional Nets More Sample-efficient than Fully-Connected Nets? by Evangelos Chatzipantazis.

Presentation 29: E-values by Sam Rosenberg.

Other topics

OOD Detection

Classical statistical goals: confidence intervals, (single and multiple) hypothesis testing

Inductive biases

Reviews, applications, etc

Learning theory & training methods

Distributed learning

Other materials

Related educational materials

Recent workshops and tutorials on related topics

Seminar series

Software tools

Probability background

ML background

  • Penn courses CIS 520, ESE 546, STAT 991, and links therein

Perspectives