PRM800K: A Process Supervision Dataset

[Blog Post] [Paper]

This repository accompanies the paper Let's Verify Step by Step and presents the PRM800K dataset introduced there. PRM800K is a process supervision dataset containing 800,000 step-level correctness labels for model-generated solutions to problems from the MATH dataset. More information on PRM800K and the project can be found in the paper.

We are releasing the raw labels as well as the instructions we gave labelers during phase 1 and phase 2 of the project. An example of the labels can be seen in the annotated sample below.

Data

The data/ folder contains our labels, formatted as newline-delimited JSON (JSONL). The data has been uploaded with Git LFS, which you'll need to install in order to properly clone the repository.

Each line represents one full solution sample and can contain many step-level labels. Here is one annotated line:

{
  // UUID representing a particular labeler.
  "labeler": "340d89bc-f5b7-45e9-b272-909ba68ee363",

  // The timestamp this trajectory was submitted.
  "timestamp": "2023-01-22T04:34:27.052924",

  // In phase 2, we split our data collection into generations, using our best
  // PRM so far to pick which solutions to score in the next generation.
  // In phase 1, this value should always be null.
  "generation": 9,

  // In each generation, we reserve some solutions for quality control. We serve
  // these solutions to every labeler, and check that they agree with our
  // gold labels.
  "is_quality_control_question": false,

  // generation -1 was reserved for a set of 30 questions we served every
  // labeler in order to screen for base task performance.
  "is_initial_screening_question": false,

  // Metadata about the question this solution is a response to.
  "question": {
    // Text of the MATH problem being solved.
    "problem": "What is the greatest common factor of $20 !$ and $200,\\!000$?  (Reminder: If $n$ is a positive integer, then $n!$ stands for the product $1\\cdot 2\\cdot 3\\cdot \\cdots \\cdot (n-1)\\cdot n$.)",
    // Ground truth solution from the MATH dataset.
    "ground_truth_solution": "The prime factorization of $200,000$ is $2^6 \\cdot 5^5$. Then count the number of factors of $2$ and $5$ in $20!$. Since there are $10$ even numbers, there are more than $6$ factors of $2$. There are $4$ factors of $5$. So the greatest common factor is $2^6 \\cdot 5^4=\\boxed{40,\\!000}$.",
    // Ground truth answer.
    "ground_truth_answer": "40,\\!000",

    // The full steps of the model-generated solution. This is only set for
    // phase 2 where we pre-generated all solutions that we labeled.
    "pre_generated_steps": [
      "I want to find the largest positive integer that divides both $20 !$ and $200,\\!000$ evenly.",
      "One way to do this is to factor both numbers into prime factors and look for the common ones.",
      "I know that $200,\\!000 = 2^5\\cdot 10^4 = 2^9\\cdot 5^4$.",
      "To find the prime factorization of $20 !$, I can use the fact that it is the product of all the positive integers from $1$ to $20$.",
      "For each prime number $p$ between $1$ and $20$, I can count how many multiples of $p$ are in that range.",
      "For example, there are $10$ multiples of $2$ between $1$ and $20$, namely $2, 4, 6, \\dots, 20$.",
      "But there are also $5$ multiples of $4$, which is $2^2$, and $2$ multiples of $8$, which is $2^3$, and $1$ multiple of $16$, which is $2^4$.",
      "So, the total power of $2$ in $20 !$ is $10 + 5 + 2 + 1 = 18$.",
      "Similarly, there are $4$ multiples of $5$, namely $5, 10, 15, 20$, so the power of $5$ in $20 !$ is $4$.",
      "There are $6$ multiples of $3$, namely $3, 6, 9, \\dots, 18$, but there are also $2$ multiples of $9$, which is $3^2$, so the power of $3$ in $20 !$ is $6 + 2 = 8$.",
      "There are $2$ multiples of $7$, namely $7$ and $14$, so the power of $7$ in $20 !$ is $2$.",
      "There are $1$ multiple of each of the other prime numbers $11, 13, 17$, and $19$, so the powers of those primes in $20 !$ are $1$ each.",
      "Therefore, the prime factorization of $20 !$ is $2^{18}\\cdot 3^8\\cdot 5^4\\cdot 7^2\\cdot 11\\cdot 13\\cdot 17\\cdot 19$.",
      "To find the greatest common factor of $20 !$ and $200,\\!000$, I need to take the lowest power of each common prime factor.",
      "The only common prime factors are $2$ and $5$, and the lowest powers are $9$ and $4$, respectively.",
      "So, the greatest common factor is $2^9\\cdot 5^4 = 512\\cdot 625 = 320,\\!000$.\n\n# Answer\n\n320,000"
    ],
  // The answer given at the end of the pre-generated solution. We can see
  // this solution is incorrect.
    "pre_generated_answer": "320,000",
    // The score given by our PRM to this solution. This one isn't rated very
    // highly!
    "pre_generated_verifier_score": 0.010779580529581414
  },

  // The human data we collected for this solution, containing correctness
  // labels for each step of the solution.
  "label": {
    "steps": [
      // Each object here represents labels for one step of the solution.
      {
        // Each step will contain one or more completions. These are candidate
        // steps the model output at this step of the trajectory. In phase 1,
        // we frequently collect labels on alternative steps, while in phase 2
        // we only collect labels on alternative steps after the first mistake,
        // so most completions lists are singletons.
        "completions": [
          {
            // Text of the step.
            "text": "I want to find the largest positive integer that divides both $20 !$ and $200,\\!000$ evenly.",
            // The rating the labeler gave to this step. Can be -1, 0, or +1.
            // This is a 0 because it isn't incorrect, but it does not make
            // any progress.
            "rating": 0,
            // The labeler can flag steps that they don't know how to label.
            // This is rarely used.
            "flagged": null
          }
        ],
        // In phase 1, if all completions were rated -1, we allowed labelers to
        // write their own +1 step. This is null for all steps in phase 2.
        "human_completion": null,
        // The index of the completion "chosen" at this step, or null if the
        // human_completion was used. You can reconstruct the solution
        // trajectory like:
        // [
        //     step["human_completion"] if step["chosen_completion"] is None
        //     else step["completions"][step["chosen_completion"]]["text"]
        //     for step in labeled_solution["label"]["steps"]
        // ]
        "chosen_completion": 0
      },
      {
        "completions": [
          {
            "text": "One way to do this is to factor both numbers into prime factors and look for the common ones.",
            "rating": 0,
            "flagged": null
          }
        ],
        "human_completion": null,
        "chosen_completion": 0
      },
      {
        // Some steps contain multiple alternative completions, and each one
        // gets a rating.
        "completions": [
          {
            "text": "I know that $200,\\!000 = 2^5\\cdot 10^4 = 2^9\\cdot 5^4$.",
            "rating": -1,
            "flagged": null
          },
          {
            "text": "To factor $20 !$, I can use the fact that every factorial is a multiple of every number less than or equal to it.",
            "rating": 0,
            "flagged": false
          },
          {
            "text": "I can use a factor tree to find the prime factors of $200,\\!000$: $200,\\!000 = 2^5\\cdot 10^4 = 2^5\\cdot 2^4\\cdot 5^4 = 2^9\\cdot 5^4$.",
            "rating": -1,
            "flagged": false
          },
          {
            "text": "I can use a factor tree to find the prime factors of $200,\\!000$.",
            "rating": 0,
            "flagged": false
          },
          {
            "text": "To factor $20 !$, I can use the fact that any factorial is divisible by all the primes less than or equal to the input.",
            "rating": 0,
            "flagged": false
          }
        ],
        "human_completion": null,
        "chosen_completion": null
      }
    ],
    // Total time in milliseconds spent on labeling this solution.
    "total_time": 278270,
    // Final result of labeling this solution. Will be one of:
    //   - "found_error": In phase 2 we stop labeling a solution after the
    //                    first error is found.
    //   - "solution": We reached a step that concluded in the correct answer
    //                 to the problem.
    //   - "bad_problem": The labeler reported the problem as broken.
    //   - "give_up": The labeler was stuck (the problem was taking too long,
    //                or the instructions were unclear) and moved onto the
    //                next problem.
    "finish_reason": "found_error"
  }
}
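
Below is a minimal sketch of loading the labels and rebuilding each solution trajectory, following the reconstruction comprehension in the chosen_completion comment above. The file name is an assumption about the data/ folder layout; adjust it to whichever labels file you cloned.

import json

# Assumed file name; the data/ folder may contain several labels files.
PATH = "data/phase2_train.jsonl"

def reconstruct_trajectory(labeled_solution):
    # Use the human-written step when no completion was chosen,
    # otherwise the chosen completion's text. Mirrors the comment
    # on chosen_completion above.
    steps = []
    for step in labeled_solution["label"]["steps"]:
        if step["chosen_completion"] is None:
            # May itself be None, e.g. the final step of a phase 2
            # solution where labeling stopped at the first error.
            steps.append(step["human_completion"])
        else:
            steps.append(step["completions"][step["chosen_completion"]]["text"])
    return steps

with open(PATH) as f:
    for line in f:
        sample = json.loads(line)  # one full solution sample per line
        trajectory = reconstruct_trajectory(sample)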

Instructions

The instructions/ folder contains the instructions documents we gave to labelers during each phase of the project.

Answer Grading

The grading/ folder contains the Python grading logic we used to determine whether a model-generated answer matches the ground truth answer in Hendrycks' MATH dataset. We build off of Hendrycks' math normalization logic in math_normalize.py and use sympy to check for equality of expressions in grader.py. We recommend using grader.grade_answer(model_answer, gt_answer), where both answers are strings, to determine whether a solution is correct.

Answer grading is difficult in general. This grading logic is designed to be conservative and will sometimes reject correct answers, though it does so less frequently than the normalization logic from MATH. Our logic might sometimes admit incorrect answers, though we've put effort into minimizing this.
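
For instance, a minimal usage sketch, assuming the repository root is on your PYTHONPATH so that the grading/ folder is importable:

from grading import grader

# Both arguments are strings. Formatting differences such as
# "40,\!000" vs "40000" are meant to be absorbed by the
# normalization and the sympy-based equality check.
print(grader.grade_answer("40,\\!000", "40000"))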

MATH Splits

As explained in Let's Verify Step by Step, we use a nonstandard MATH train/test split.

In order to avoid the risk of over-fitting on the 7,500 MATH training problems, we expanded the training set to include 4,500 MATH test split problems. We therefore evaluate our models only on the remaining 500 held-out problems. We selected these 500 test problems uniformly at random, and we believe they are representative of the test set as a whole.

The math_splits/ folder contains our selected splits in the train.jsonl and test.jsonl files. You'll need Git LFS to properly clone these files.
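
A quick sketch for loading the splits; per the description above, there should be 12,000 training problems (7,500 + 4,500) and 500 held-out test problems. The per-line field names are not documented here, so this only counts lines:

import json

def load_split(path):
    with open(path) as f:
        return [json.loads(line) for line in f]

train = load_split("math_splits/train.jsonl")
test = load_split("math_splits/test.jsonl")
print(len(train), len(test))  # expected: 12000 500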

Scored Samples

We release all large-scale model samples used to evaluate the large-scale ORM and PRM, corresponding to Figure 3 in the paper. Each test problem has up to 1,860 scored samples. Solutions that failed to reach an answer within 1024 tokens were discarded, resulting in fewer than 1,860 samples on some problems. We account for this in the best-of-N evaluation logic.
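
To make that accounting concrete, here is a sketch of the core best-of-N computation; this illustrates the idea and is not the code in eval/eval.py:

import random

def best_of_n_correct(samples, n, rng):
    # samples: list of (verifier_score, is_correct) pairs for one problem.
    # Problems with fewer than n surviving samples contribute all they have.
    pool = rng.sample(samples, min(n, len(samples)))
    best_score, best_correct = max(pool, key=lambda s: s[0])
    return best_correct

def solve_rate(problems, n, seed=0):
    # Fraction of problems where the top-scored of n samples is correct.
    rng = random.Random(seed)
    return sum(best_of_n_correct(p, n, rng) for p in problems) / len(problems)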

Evaluate the PRM:

python eval/eval.py --method prm

Evaluate the ORM:

python eval/eval.py --method orm

Citation

Please use the BibTeX entry below to cite this dataset:

@article{lightman2023lets,
  title={Let's Verify Step by Step},
  author={Lightman, Hunter and Kosaraju, Vineet and Burda, Yura and Edwards, Harri and Baker, Bowen and Lee, Teddy and Leike, Jan and Schulman, John and Sutskever, Ilya and Cobbe, Karl},
  journal={arXiv preprint arXiv:2305.20050},
  year={2023}
}
