ppi_py

A package for statistically rigorous scientific discovery using machine learning. Implements prediction-powered inference.

Prediction-powered inference (PPI) is a framework for statistically rigorous scientific discovery using machine learning. Given a small amount of data with gold-standard labels and a large amount of unlabeled data, prediction-powered inference allows for the estimation of population parameters such as the mean outcome, the median outcome, and linear and logistic regression coefficients. Prediction-powered inference can be used both to produce better point estimates of these quantities and to construct tighter confidence intervals and more powerful p-values. The methods work both in the i.i.d. setting and for certain classes of distribution shifts.
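
For intuition, here is the prediction-powered estimate of the mean, as derived in the original paper: it averages the model's predictions on the $N$ unlabeled data points and then subtracts the average prediction error (the "rectifier") measured on the $n$ gold-standard labels,

$$\hat{\theta}^{\mathrm{PP}} = \frac{1}{N}\sum_{i=1}^{N} \hat{Y}_i^{\mathrm{unlabeled}} - \frac{1}{n}\sum_{i=1}^{n}\left(\hat{Y}_i - Y_i\right).$$

The more accurate the model, the smaller the variance of the rectifier, and the tighter the resulting confidence intervals.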

See the API documentation and the original paper for further details.

This package is actively maintained, and contributions from the community are welcome.

Getting Started

To install the package, run

pip install ppi-python

This will build and install the most recent version of the package.

Warmup: estimating the mean

To test your installation, you can try running the prediction-powered mean estimation algorithm on the galaxies dataset. The gold-standard labels and model predictions from the dataset will be downloaded into a folder called ./data/. The labels, $Y$, are binary indicators of whether or not each galaxy is a spiral galaxy. The model predictions, $\hat{Y}$, are the model's estimated probabilities that each galaxy image has spiral arms. The inference target is $\theta^* = \mathbb{E}[Y]$, the fraction of spiral galaxies. You will produce a confidence interval, $\mathcal{C}^{\mathrm{PP}}_\alpha$, which contains $\theta^*$ with probability $1-\alpha=0.9$, i.e.,

$$\mathbb{P}\left( \theta^* \in \mathcal{C}^{\mathrm{PP}}_\alpha\right) \geq 0.9.$$

The code for this is below. It can be copy-pasted directly into the Python REPL.

# Imports
import numpy as np
from ppi_py import ppi_mean_ci
from ppi_py.datasets import load_dataset
np.random.seed(0) # For reproducibility's sake
# Download and load dataset
data = load_dataset('./data/', "galaxies")
Y_total = data["Y"]; Yhat_total = data["Yhat"]
# Set up the inference problem
alpha = 0.1 # Error rate
n = 1000 # Number of labeled data points
rand_idx = np.random.permutation(Y_total.shape[0])
Yhat = Yhat_total[rand_idx[:n]]
Y = Y_total[rand_idx[:n]]
Yhat_unlabeled = Yhat_total[rand_idx[n:]] # Unlabeled pool: the complement of the labeled subsample
# Produce the prediction-powered confidence interval
ppi_ci = ppi_mean_ci(Y, Yhat, Yhat_unlabeled, alpha=alpha)
# Print the results
print(f"theta={Y_total.mean():.3f}, CPP={ppi_ci}")

The expected output looks as follows$^*$:

theta=0.259, CPP=(0.2322466630315982, 0.2626038799812829)

($^*$ these results were produced with numpy==1.26.1; exact values may vary slightly in other environments due to differences in random number generation.)

If you have reached this stage, congratulations! You have constructed a prediction-powered confidence interval. See the documentation for more uses of prediction-powered inference.

Examples

The package comes with a suite of examples on real data, provided as notebooks in the ./examples/ folder.

Usage and Documentation

There is a common template that all PPI confidence intervals follow.

ppi_[ESTIMAND]_ci(X, Y, Yhat, X_unlabeled, Yhat_unlabeled, alpha=0.1)

You can replace [ESTIMAND] with the estimand of your choice. Not all arguments are required for every estimand; those that are not needed are omitted from the signature. For example, in the case of mean estimation, the function signature is:

ppi_mean_ci(Y, Yhat, Yhat_unlabeled, alpha=0.1)

All the prediction-powered point estimates and confidence intervals implemented so far can be imported by running from ppi_py import ppi_[ESTIMAND]_pointestimate, ppi_[ESTIMAND]_ci. For the case of the mean, one can also import the p-value as from ppi_py import ppi_mean_pval.
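
For instance, reusing Y, Yhat, and Yhat_unlabeled from the warmup above, the mean estimand supports all three interfaces (a minimal sketch; see the API documentation for the exact signature of ppi_mean_pval):

from ppi_py import ppi_mean_pointestimate, ppi_mean_ci, ppi_mean_pval
# Prediction-powered point estimate of the mean outcome
theta_pp = ppi_mean_pointestimate(Y, Yhat, Yhat_unlabeled)
# 90% prediction-powered confidence interval
ci = ppi_mean_ci(Y, Yhat, Yhat_unlabeled, alpha=0.1)
print(theta_pp, ci)
# ppi_mean_pval computes a p-value for a null hypothesis about the mean;
# consult the API documentation for its arguments.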

Full documentation is available online.

Repository structure

The repository is organized into three main folders:

  • ./ppi_py/
  • ./examples/
  • ./tests/

The first folder, ./ppi_py, contains all the code that eventually gets built into the ppi_py package. Most importantly, there is a file, ./ppi_py/ppi.py, which implements all the prediction-powered point estimates, confidence intervals, and p-values for the different estimands. The file ./ppi_py/cross_ppi.py contains implementations of cross-prediction-powered inference, which allows the model to be trained on the same data used for inference. There is also a file, ./ppi_py/baselines.py, which implements several baselines. Finally, the file ./ppi_py/datasets/datasets.py handles the loading of the sample datasets.

The folder ./examples contains notebooks implementing prediction-powered inference on several datasets and estimands. There is also an additional subfolder, ./examples/baselines, which contains comparisons to certain baseline algorithms, as in the appendix of the original PPI paper.

The folder ./tests contains unit tests for each function implemented in the ppi_py package. The tests are organized by estimand and can be run by executing pytest in the root directory. Some of the tests are stochastic and therefore have some failure probability, even if the functions are all implemented correctly. If a test fails, it may be worth running it again. The tests can be debugged by adding the -s flag and using print statements or pdb. Note that, in order to be recognized by pytest, all test function names must begin with test_.
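
As a minimal sketch of what a new test in this style could look like (the synthetic data and the coverage check are illustrative, not taken from the existing suite):

import numpy as np
from ppi_py import ppi_mean_ci

def test_ppi_mean_ci_covers_truth():
    # Synthetic problem: binary labels with noisy probability predictions
    rng = np.random.default_rng(0)
    Y_total = rng.binomial(1, 0.3, size=10000).astype(float)
    Yhat_total = np.clip(Y_total + rng.normal(0, 0.1, size=10000), 0, 1)
    n = 1000
    ci = ppi_mean_ci(Y_total[:n], Yhat_total[:n], Yhat_total[n:], alpha=0.1)
    # Stochastic check: the interval should usually contain the true mean
    assert ci[0] <= Y_total.mean() <= ci[1]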

The remaining files and folders are boilerplate and not relevant to most users.

Contributing

Thank you so much for considering making a contribution to ppi_py; we deeply value and appreciate it.

The contents of this repository will be pushed to PyPI whenever there are substantial revisions. If there are methods or examples within the PPI framework you'd like to see implemented, feel free to suggest them on the issues page. Community contributions are welcome and encouraged as pull requests directly onto the main branch. The main criteria for accepting such pull requests are:

  • The contribution should align with the repository's scope.
  • All new functionality should be tested for correctness within our existing pytest framework.
  • If the pull request involves a new PPI method, it should have a formal mathematical proof of validity which can be referenced.
  • If the pull request solves a bug, there should be a reproducible bug (within a specific environment) that is solved. Bug reports can be made on the issues page.
  • The contribution should be well documented.
  • The pull request should be of generally high quality, as judged by the repository maintainers, who will approve pull requests at their discretion. Before working on one, it may be helpful to post a question on the issues page to check whether the contribution would be a good candidate for merging into the main branch.

Accepted pull requests will be run through an automated Black formatter, so contributors may want to run Black locally first.
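
For example, to format locally from the repository root before opening a pull request:

pip install black
black .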

Papers

The repository currently implements the methods developed in the following papers:

  • Prediction-Powered Inference
  • PPI++: Efficient Prediction-Powered Inference
  • Cross-Prediction-Powered Inference

More Repositories

1. conformal-prediction: Lightweight, useful implementation of conformal prediction on real data. (Jupyter Notebook, 707 stars)
2. conformal_classification: Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true class with high probability (via conformal prediction). (Jupyter Notebook, 218 stars)
3. rcps: Official codebase for "Distribution-Free, Risk-Controlling Prediction Sets". (Python, 84 stars)
4. conformal-time-series: Conformal prediction for time-series applications. (Jupyter Notebook, 81 stars)
5. prediction-powered-inference: A statistical toolkit for scientific discovery using machine learning. (Jupyter Notebook, 69 stars)
6. event_based_gaze_tracking: Dataset release for Event-Based, Near-Eye Gaze Tracking Beyond 10,000 Hz. (Python, 62 stars)
7. ltt: Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control. (Jupyter Notebook, 60 stars)
8. conformal-risk: Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vision and natural language processing. (Python, 55 stars)
9. im2im-uq: Image-to-image regression with uncertainty quantification in PyTorch. Take any dataset and train a model to regress images to images with rigorous, distribution-free uncertainty quantification. (Python, 50 stars)
10. cfr-covid-19: Implementation of https://arxiv.org/abs/2003.08592. (R, 17 stars)
11. private_prediction_sets: Wrap around any model to output differentially private prediction sets with finite-sample validity on any dataset. (Python, 17 stars)
12. online-conformal-decaying. (Jupyter Notebook, 3 stars)
13. conformal-triage. (Jupyter Notebook, 2 stars)