wilds
A machine learning benchmark of in-the-wild distribution shifts, with data loaders, evaluators, and default models.

dsir
DSIR: large-scale data selection framework for language model training.

jukemir
Perform transfer learning for MIR using Jukebox!

verified_calibration
Calibration library and code for the paper "Verified Uncertainty Calibration" (Ananya Kumar, Percy Liang, Tengyu Ma; NeurIPS 2019 Spotlight).

incontext-learning
Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implicit Bayesian Inference".

gradual_domain_adaptation

swords
The Stanford Word Substitution (Swords) Benchmark.

in-n-out
Code for the ICLR 2021 paper "In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness".

composed_finetuning
Code for the ICML 2021 paper "Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization" by Sang Michael Xie, Tengyu Ma, Percy Liang.
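The verified_calibration entry concerns measuring and correcting miscalibrated model confidences. As a point of reference only (this is a minimal sketch of the standard binned expected calibration error, not the library's API), the quantity such a library estimates can be written as:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: the weighted average over bins of
    |accuracy in bin - mean confidence in bin|."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        # Assign each prediction to a confidence bin; conf == 1.0 goes in the last bin.
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))

    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(o for _, o in b) / len(b)
        ece += (len(b) / n) * abs(accuracy - avg_conf)
    return ece
```

For example, a model that reports 80% confidence but is right only half the time in that bin contributes a gap of 0.3; the paper's point is that plug-in estimators like this one can be biased, which motivates the verified estimators in the library.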