
APPS: Automated Programming Progress Standard (NeurIPS 2021)

Measuring Coding Challenge Competence With APPS

This is the repository for Measuring Coding Challenge Competence With APPS by Dan Hendrycks*, Steven Basart*, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, and Jacob Steinhardt.

Download the APPS dataset here (~1.3 GB).

This repository contains both training and evaluation code.

Fine-tuned GPT-2 1.5B and GPT-Neo 2.7B weights are available here.

For other benchmarks of large Transformer models, see a dataset that tests ability in competition math, a dataset that tests knowledge of ethics, and a dataset spanning 50+ academic subjects.

How to Use

Training instructions are in train/README, and evaluation instructions are in eval/README.

Hugging Face

The dataset is also available on Hugging Face Datasets under apps.
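As a minimal sketch of working with the data: in the Hugging Face version of APPS, each record carries the problem statement alongside reference solutions and test cases, with the latter two stored as JSON-encoded strings that must be decoded before use. The Hub id (commonly "codeparrot/apps") and the exact field names ("question", "solutions", "input_output", "difficulty") are assumptions here, illustrated on a mock record rather than a real download:

```python
import json

# Hypothetical record mirroring the assumed APPS field layout on the Hub:
# "solutions" and "input_output" are JSON strings, not parsed objects.
record = {
    "problem_id": 0,
    "question": "Given two integers a and b, print their sum.",
    "solutions": json.dumps(
        ["a, b = map(int, input().split())\nprint(a + b)"]
    ),
    "input_output": json.dumps(
        {"inputs": ["1 2\n"], "outputs": ["3\n"]}
    ),
    "difficulty": "introductory",
}

# Decode the JSON-encoded fields before evaluating a model's output.
solutions = json.loads(record["solutions"])
tests = json.loads(record["input_output"])

print(len(solutions))  # number of reference solutions for this problem
for stdin, expected in zip(tests["inputs"], tests["outputs"]):
    print(stdin.strip(), "->", expected.strip())
```

Loading the real data would then be a matter of `datasets.load_dataset("codeparrot/apps")` (assuming that Hub id) and applying the same decoding step to each record.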

Citation

If you find this useful in your research, please consider citing:

@article{hendrycksapps2021,
  title={Measuring Coding Challenge Competence With APPS},
  author={Dan Hendrycks and Steven Basart and Saurav Kadavath and Mantas Mazeika and Akul Arora and Ethan Guo and Collin Burns and Samir Puranik and Horace He and Dawn Song and Jacob Steinhardt},
  journal={NeurIPS},
  year={2021}
}