CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning

This is the official code for the paper CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning (accepted to NeurIPS 2022). Do check out our blog and poster.

Authors: Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, Steven C.H. Hoi

CodeRL Overview


An example program synthesis task (right): each task includes a problem specification in natural language, which often contains example input and output pairs; the expected output is a program that is checked for functional correctness against some unit tests. A high-level overview of our CodeRL framework for program synthesis (left): CodeRL treats the pretrained language model (LM) as a stochastic policy, treats token predictions as actions, and estimates rewards from the unit test results of output programs.

  • During training, we treat the code-generating language models as an actor network, and introduce a critic network that is trained to predict the functional correctness of generated programs and provide dense feedback signals to the actor.
  • During inference, we introduce a new generation procedure with a critic sampling strategy that allows a model to automatically regenerate programs based on feedback from example unit tests and critic scores.

Installation

The code requires the dependencies specified in requirements.txt. Install the relevant libraries individually, or run:

pip install -r requirements.txt

Install the transformers library from the included source code (it is developed from the original version 4.16.1 code):

cd transformers
pip install -e .

Datasets

For pretraining, apart from CodeSearchNet (CSN), we use the Python GitHub Code Dataset (GCPY). We compiled public, non-personal data from GitHub consisting of permissively licensed Python code (e.g. “mit”, “apache-2”, “bsd-3-clause”, “bsd-2-clause”, “cc0-1.0”, “unlicense”, “isc”). Please see the paper for more details on pretraining data preprocessing and pretraining.

After pretraining, we finetune/evaluate models on the following major program synthesis benchmarks:

  • APPS: Please follow the downloading and preprocessing instructions provided here.
  • MBPP: The dataset is available here.

On both benchmarks, we preprocess the data and construct input/output sequences in the same way as the original benchmark papers.

Download and unzip all files into the data folder.

Example Unit Tests

In addition to the original hidden unit tests on APPS, we also utilize the example tests that are often embedded in problem descriptions. After downloading and unzipping APPS, you can run the notebook extract_example_test.ipynb to extract and save the example unit tests of the APPS test samples into the corresponding sample folders, e.g. data/APPS/test/0000/. We release the example unit tests that we already extracted with this notebook in the folder data/APPS_test_example_tests/. The average number of example unit tests per sample is 1.9764.
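
For illustration, a minimal sketch of loading these example tests, assuming they are stored in APPS's input_output.json format with "inputs" and "outputs" lists (the exact file name may differ):

import json
import os

def load_example_tests(sample_dir):
    # Hypothetical helper: load the example unit tests extracted for one APPS problem.
    # Assumes the notebook saves them in APPS's input_output.json format; adjust the
    # file name if the extraction notebook uses a different one.
    with open(os.path.join(sample_dir, "input_output.json")) as f:
        tests = json.load(f)
    return list(zip(tests["inputs"], tests["outputs"]))

# e.g. pairs = load_example_tests("data/APPS/test/0000/")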

Models

We employ CodeT5 (a family of encoder-decoder language models for code, introduced in the CodeT5 paper) as the foundation model in our work.

We pretrained CodeT5 with a larger dataset and improved learning objectives. We release two large-sized CodeT5 checkpoints on Hugging Face: Salesforce/codet5-large and Salesforce/codet5-large-ntp-py (see the loading example after the list below).

  • CodeT5-large: a 770M CodeT5 model pretrained with the Masked Span Prediction objective on CSN; it achieved new SOTA results on several CodeXGLUE benchmarks. See Appendix A.1 of the paper for more details.
  • CodeT5-large-ntp-py: a 770M CodeT5 model first pretrained with the Masked Span Prediction objective on CSN and GCPY, then with the Next Token Prediction objective on GCPY. This checkpoint was especially optimized for Python code generation tasks and is the one employed by CodeRL.
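
As noted above, both checkpoints are public on Hugging Face; a minimal loading sketch with the transformers library (the prompt and generation settings here are illustrative only):

from transformers import AutoTokenizer, T5ForConditionalGeneration

# Load the Python-specialized CodeRL foundation model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-large-ntp-py")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-large-ntp-py")

prompt = "Write a Python function that returns the sum of two numbers."
inputs = tokenizer(prompt, return_tensors="pt")
# Illustrative sampling settings; see scripts/generate.sh for the settings used in our experiments.
outputs = model.generate(**inputs, do_sample=True, temperature=0.6, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))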

For finetuning on downstream code generation tasks on APPS, we adopted critic models for RL training. We released the following critic model checkpoints (on Google Cloud Storage):

  • CodeT5-finetuned_critic: a CodeT5 model which is initialized from a normal CodeT5-base and trained as a classifier to predict unit test outcomes (one of Compile Error, Runtime Error, Failed Tests, and Passed Tests). The critic is used to estimate returns and facilitate RL finetuning.
  • CodeT5-finetuned_critic_binary: similar to the prior model but was trained with binary annotations (Passed Tests or not Passed Tests only). This critic is used to facilitate generation procedures during inference.

We released the following finetuned code generation model checkpoints (on Google Cloud Storage):

  • CodeT5-finetuned_CodeRL: a CodeT5 model which was initialized from the prior pretrained CodeT5-large-ntp-py and then finetuned on APPS following our CodeRL training framework.

Download all files into the models folder.

Processes

Generating Programs

We created scripts/generate.sh to generate programs on the APPS benchmark. You can directly run this file by configuring the following parameters:

  • model_path: Path to a trained CodeT5-style model. Example: models/codet5_finetuned_codeRL
  • tokenizer_path: Path to the saved CodeT5 tokenizer (or a path to cache the tokenizer). Example: models/codet5_tokenizer/
  • test_path: Path to the original test samples. Example: data/APPS/test/
  • start: Start index of the test samples to generate for. Example: 0
  • end: End index of the test samples to generate for. Example: 5000
  • num_seqs: Total number of output programs to generate (for sampling generation). Example: 1000
  • num_seqs_per_iter: Number of output programs per generation round; depending on GPU memory, generation can be split into multiple rounds of this size. Example: 50
  • temp: Temperature for sampling generation. Example: 0.6
  • output_path: Path to save the generated programs. Example: outputs/codes/

Other parameters are defined in the file utils/generate_configs.py.

Running the generation script will output programs, which are saved into JSON files with the data fields code (list of output programs) and prompt (the input sequence constructed for the LM).
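
For example, a minimal sketch of inspecting one such output file (the file name below is hypothetical):

import json

# Hypothetical output file for one test sample; actual file names depend on the script's settings.
with open("outputs/codes/0000.json") as f:
    record = json.load(f)

print(record["prompt"])      # the input sequence fed to the LM
print(len(record["code"]))   # number of generated programs for this sample
print(record["code"][0])     # first generated program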

Running Unit Tests

Once the programs are generated, they are evaluated against the corresponding unseen unit tests in each problem.

To execute the unit tests and obtain test outcomes, we adapt our code to the official implementation of the APPS benchmark.

We created scripts/run_unit_tests.sh to run unit tests on generated programs on the APPS benchmark. You can directly run this file by configuring the following parameters:

  • code_path: Path to the generated programs to be evaluated. Example: outputs/codes/
  • output_path: Path to save the unit test results. Example: outputs/test_results/
  • test_path: Path to the original test samples. Example: data/APPS/test/
  • example_tests: Whether to evaluate programs on example unit tests (for filtering and refining programs) or hidden unit tests (for final evaluation). 0: use hidden unit tests; 1: use example unit tests
  • start: Start index of the test samples to be evaluated. Example: 0
  • end: End index of the test samples to be evaluated. Example: 5000
  • threads: Number of threads for running unit tests on multiple test samples in parallel; set according to available compute to speed up execution. Example: 30

Running the script will output test results for each program. For each test sample, the results are saved into a pickle file, including data fields results (list of test outcomes, one of -2 = compile error, -1 = runtime error, False = failed test case, True = passed test case), errors (real compile error trace with details like error type and line numbers), and sols (corresponding programs being evaluated).
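
As a rough sketch (the file name below is hypothetical, and we assume results[i] holds the per-test-case outcomes for the program sols[i]), one can load a result file and check which programs pass all tests:

import pickle

# Hypothetical result file for one test sample; fields follow the description above.
with open("outputs/test_results/0000.pkl", "rb") as f:
    result = pickle.load(f)

for program, outcomes in zip(result["sols"], result["results"]):
    # outcomes: -2 = compile error, -1 = runtime error, False = failed test, True = passed test
    passed_all = all(o is True for o in outcomes)
    print(passed_all)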

Compared to the original APPS implementation, we adopt one trick: the unit-testing loop exits early as soon as a program fails a test case. This speeds up the testing process without affecting the final pass-rate measures. Refer to the run_test function in utils/testing_utils.py for more details.

Evaluating Programs

To compute the pass@k metrics, rather than using the APPS evaluation metrics, we follow the official implementation of the HumanEval benchmark, which provides an unbiased estimate of pass@k from the n >= k programs generated per problem.
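
For reference, for each problem with n generated programs of which c pass all unit tests, the HumanEval-style estimator computes pass@k = 1 - C(n-c, k)/C(n, k), averaged over problems; a minimal sketch:

from math import comb

def pass_at_k(n, c, k):
    # Unbiased estimator from the HumanEval paper: probability that at least one of
    # k programs drawn (without replacement) from the n generated programs passes all tests.
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. average pass_at_k(n_i, c_i, k) over all APPS test problems.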

Training Critic

We can train a critic model as a classifier that predicts the test outcomes of generated samples. For each training sample, we can follow the prior processes (generating programs and running unit tests) to obtain synthetic samples and their annotations of unit test outcomes. On average, we generate 20 programs per training sample (we provided some example generated programs in data/APPS/train/).

Once the programs are tested, we can use their test outcomes as annotations to train a critic model initialized from an LM pretrained on source code (we used CodeT5-base in this case).
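
As an illustration only (the label names and mapping below are assumptions of this sketch, not the repository's exact code), the per-program annotation could be derived from the unit test outcomes like this:

def outcome_label(outcomes):
    # Hypothetical mapping from a program's per-test-case outcomes
    # (-2 = compile error, -1 = runtime error, False/True = failed/passed test)
    # to one of the critic's four classes.
    if -2 in outcomes:
        return "compile_error"
    if -1 in outcomes:
        return "runtime_error"
    if all(o is True for o in outcomes):
        return "passed_tests"
    return "failed_tests"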

We created scripts/train_critic.sh and scripts/train_critic_deepspeed.sh to train a critic using generated programs. You can directly run these files by configuring the following parameters:

  • batch-size-per-replica: Number of training samples per GPU device. Example: 8
  • grad-acc-steps: Gradient accumulation steps. Example: 1
  • epochs: Number of training epochs. Example: 10
  • lr: Learning rate. Example: 2e-5
  • save-freq: Save model checkpoints after this number of training steps. Example: 1000
  • log-freq: Log model training losses after this number of training steps. Example: 10
  • save_total_limit: Total number of checkpoints to keep (only the latest ones are kept). Example: 5
  • fp16: Enable this to train the model in 16-bit mode and reduce memory usage. Example: N/A
  • deepspeed: If using DeepSpeed, set this parameter to the DeepSpeed training configuration file. Example: configs/deepspeed_configs.json
  • db: Enable this to train in debugging mode, i.e. with a small dummy data split and only 1 data worker. Example: N/A

Other parameters are defined in the file utils/train_configs.py.

Running the script will train a critic model as a classifier that takes a problem description plus a generated program as input and outputs one of 4 test outcomes: compile error, runtime error, failed tests, or passed tests. The model checkpoints are saved in a folder under exps/.

Generating Critic Scores

We created scripts/generate_critic_scores.sh to generate critic scores for synthetic programs. We use the same parameters as defined in the program generation process above, with the following additional parameters:

  • critic_scores: Enable this to run inference with critic models and obtain critic scores. Example: N/A
  • gt_solutions: Enable this to run inference on ground-truth programs; otherwise, synthetic programs are used by default. Example: N/A
  • binary_prediction: Enable this to predict with binary classification, i.e. passed tests or failed tests only. Example: N/A

Other parameters are defined in the file utils/generate_configs.py.

Running the generation script will output the predictions of the critic model. For each data sample, the prediction is saved into a pkl (pickle) file with the data fields code (list of programs), prompt (input sequence constructed for the critic model), gt_error_type (ground-truth test outcomes), pred_error_type (test outcomes predicted by the critic), and error_hidden_states (hidden states returned by the critic).
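
As a rough sketch (the file name below is hypothetical), one could measure how often the critic's predicted outcomes agree with the ground-truth outcomes:

import pickle

# Hypothetical critic-prediction file for one data sample; fields follow the description above.
with open("outputs/critic_scores/0000.pkl", "rb") as f:
    pred = pickle.load(f)

# Fraction of programs whose test outcome the critic predicted correctly.
matches = [g == p for g, p in zip(pred["gt_error_type"], pred["pred_error_type"])]
print(sum(matches) / max(len(matches), 1))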

Finetuning with Ground-truth Programs

We can finetune any pretrained language model into a program synthesis model that generates code from a natural language problem description. In our approach, this finetuning stage is a warmup stage that uses the ground-truth annotations (from APPS) before a further finetuning stage on synthetic/generated programs.

We created scripts/train_actor.sh and scripts/train_actor_deepspeed.sh which include the parameters as defined above in the critic training process.

Running the script will finetune a pretrained CodeT5-large model that receives a problem description as input and returns a corresponding solution program in Python. The model checkpoints are saved in a folder under exps/.

Finetuning with Generated Programs

We created scripts/train_actor_rl.sh and scripts/train_actor_rl_deepspeed.sh to train pretrained LMs with synthetically generated programs. We use the parameters as defined above in the critic training process, with the following additional parameters:

  • model_path: Path to a finetuned model checkpoint, e.g. from warmup training. Example: models/codet5_finetuned_codeRL
  • relative_returns: Enable this to use a baseline and compute relative return estimates rather than absolute return estimates in the RL loss. Example: N/A

Other parameters are defined in the file utils/train_configs.py.

Running the script will load a finetuned CodeT5-large model and continue to train it with both generated programs and ground-truth programs in alternating training steps. The model checkpoints are saved in a folder under exps/.
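
To illustrate only the effect of the relative_returns flag (a generic policy-gradient sketch, not our exact implementation; all names below are made up), subtracting a baseline from the return before weighting the sequence log-likelihood looks like:

import torch

def rl_loss(log_probs, returns, baseline=None):
    # log_probs: per-sequence log-likelihood of the generated program under the actor LM
    # returns: per-sequence return estimate (e.g. derived from unit test outcomes / critic scores)
    # baseline: optional per-sequence baseline; when given, the relative return (return - baseline)
    #           is used instead of the absolute return, as with the relative_returns flag.
    advantage = returns - baseline if baseline is not None else returns
    return -(advantage.detach() * log_probs).mean()

# e.g. loss = rl_loss(torch.tensor([-12.3]), torch.tensor([1.0]), baseline=torch.tensor([0.4]))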

Generating Programs with Critic Sampling

We will release the implementation details of our critic sampling procedure.

Example Generated Programs

The problem is from the APPS benchmark, and the solution programs are generated by CodeT5 and CodeRL.

Citation

If you find the paper or the source code useful to your projects, please cite the following bibtex:

@inproceedings{
	le2022coderl,
	title={Code{RL}: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning},
	author={Hung Le and Yue Wang and Akhilesh Deepak Gotmare and Silvio Savarese and Steven Hoi},
	booktitle={Advances in Neural Information Processing Systems},
	editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
	year={2022},
	url={https://openreview.net/forum?id=WaGvb7OzySA}
}

License

The code is released under BSD 3-Clause - see LICENSE.txt for details.

This code is developed from other open-source projects, including APPS, HumanEval, and transformers. We thank the original contributors of these works for open-sourcing their valuable source code.
