
Status: Archive (code is provided as-is, no updates expected)

Jukebox

Code for "Jukebox: A Generative Model for Music"

Paper Blog Explorer Colab

Install

Install the conda package manager from https://docs.conda.io/en/latest/miniconda.html

# Required: Sampling
conda create --name jukebox python=3.7.5
conda activate jukebox
conda install mpi4py=3.0.3 # if this fails, try: pip install mpi4py==3.0.3
conda install pytorch=1.4 torchvision=0.5 cudatoolkit=10.0 -c pytorch
git clone https://github.com/openai/jukebox.git
cd jukebox
pip install -r requirements.txt
pip install -e .

# Required: Training
conda install av=7.0.01 -c conda-forge 
pip install ./tensorboardX
 
# Optional: Apex for faster training with fused_adam
conda install pytorch=1.1 torchvision=0.3 cudatoolkit=10.0 -c pytorch
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex

Sampling

Sampling from scratch

To sample normally, run the following command. The model can be 5b, 5b_lyrics, or 1b_lyrics.

python jukebox/sample.py --model=5b_lyrics --name=sample_5b --levels=3 --sample_length_in_seconds=20 \
--total_sample_length_in_seconds=180 --sr=44100 --n_samples=6 --hop_fraction=0.5,0.5,0.125
python jukebox/sample.py --model=1b_lyrics --name=sample_1b --levels=3 --sample_length_in_seconds=20 \
--total_sample_length_in_seconds=180 --sr=44100 --n_samples=16 --hop_fraction=0.5,0.5,0.125

The above generates the first sample_length_in_seconds seconds of audio from a song of total length total_sample_length_in_seconds. To use multiple GPUs, launch the above scripts as mpiexec -n {ngpus} python jukebox/sample.py ... so that the work is split across {ngpus} GPUs.

The samples decoded from each level are stored in {name}/level_{level}. You can also view the samples as an HTML page with the aligned lyrics under {name}/level_{level}/index.html. Run python -m http.server and open the HTML page through the server to see the lyrics animate as the song plays.
A summary of all sampling data including zs, x, labels and sampling_kwargs is stored in {name}/level_{level}/data.pth.tar.
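
If you want to inspect or reuse that file programmatically, a minimal sketch like the following should work, assuming it is a regular torch checkpoint containing the keys listed above:

import torch

# Minimal sketch: inspect the sampling summary saved by sample.py
# (path taken from the example runs above).
data = torch.load('sample_5b/level_0/data.pth.tar', map_location='cpu')
print(data.keys())               # expected: zs, x, labels, sampling_kwargs
zs = data['zs']                  # discrete VQ-VAE codes, one tensor per level
print([z.shape for z in zs])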

The hps are for a V100 GPU with 16 GB GPU memory. The 1b_lyrics, 5b, and 5b_lyrics top-level priors take up 3.8 GB, 10.3 GB, and 11.5 GB, respectively. The peak memory usage to store the transformer key/value cache is about 400 MB for 1b_lyrics and 1 GB for 5b_lyrics per sample. If you run into CUDA out-of-memory errors, try 1b_lyrics, or decrease max_batch_size in sample.py and --n_samples in the script call.

On a V100, it takes about 3 hrs to fully sample 20 seconds of music. Since this is a long time, it is recommended to use n_samples > 1 so you can generate as many samples as possible in parallel. The 1B lyrics and upsamplers can process 16 samples at a time, while 5B can fit only up to 3. Since the vast majority of time is spent on upsampling, we recommend using a multiple of 3 less than 16 like --n_samples 15 for 5b_lyrics. This will make the top-level generate samples in groups of three while upsampling is done in one pass.

To continue sampling from already generated codes for a longer duration, you can run

python jukebox/sample.py --model=5b_lyrics --name=sample_5b --levels=3 --mode=continue \
--codes_file=sample_5b/level_0/data.pth.tar --sample_length_in_seconds=40 --total_sample_length_in_seconds=180 \
--sr=44100 --n_samples=6 --hop_fraction=0.5,0.5,0.125

Here, we take the 20-second samples saved from the first sampling run at sample_5b/level_0/data.pth.tar and continue by adding 20 more seconds.

You could also continue directly from the level 2 saved outputs; just pass --codes_file=sample_5b/level_2/data.pth.tar. Note that this will upsample the full 40-second song at the end.

If you stopped sampling at only the first level and want to upsample the saved codes, you can run

python jukebox/sample.py --model=5b_lyrics --name=sample_5b --levels=3 --mode=upsample \
--codes_file=sample_5b/level_2/data.pth.tar --sample_length_in_seconds=20 --total_sample_length_in_seconds=180 \
--sr=44100 --n_samples=6 --hop_fraction=0.5,0.5,0.125

Here, we take the 20-second samples saved from the first sampling run at sample_5b/level_2/data.pth.tar and upsample the lower two levels.

Prompt with your own music

If you want to prompt the model with your own creative piece or any other music, first save the clips as wave (.wav) files and run

python jukebox/sample.py --model=5b_lyrics --name=sample_5b_prompted --levels=3 --mode=primed \
--audio_file=path/to/recording.wav,awesome-mix.wav,fav-song.wav,etc.wav --prompt_length_in_seconds=12 \
--sample_length_in_seconds=20 --total_sample_length_in_seconds=180 --sr=44100 --n_samples=6 --hop_fraction=0.5,0.5,0.125

This will load the four files, tile them to fill a batch of n_samples, and prime the model with the first prompt_length_in_seconds seconds.

Training

VQVAE

To train a small vqvae, run

mpiexec -n {ngpus} python jukebox/train.py --hps=small_vqvae --name=small_vqvae --sample_length=262144 --bs=4 \
--audio_files_dir={audio_files_dir} --labels=False --train --aug_shift --aug_blend

Here, {audio_files_dir} is the directory containing the audio files for your dataset, and {ngpus} is the number of GPUs you want to train with. The above trains a two-level VQ-VAE with downs_t = (5, 3) and strides_t = (2, 2), meaning we downsample the audio by 2**5 = 32 to get the first level of codes, and by 2**8 = 256 to get the second level of codes.
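
As a quick sanity check of those hop lengths, here is a small sketch (plain Python, not part of the codebase) that computes the cumulative downsampling per level from downs_t and strides_t:

# Cumulative downsampling per VQ-VAE level for downs_t=(5, 3), strides_t=(2, 2).
downs_t, strides_t = (5, 3), (2, 2)
hops, total = [], 1
for down, stride in zip(downs_t, strides_t):
    total *= stride ** down
    hops.append(total)
print(hops)  # [32, 256] -> level 0 codes every 32 samples, level 1 every 256
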
Checkpoints are stored in the logs folder. You can monitor the training by running Tensorboard

tensorboard --logdir logs

Prior

Train prior or upsamplers

Once the VQ-VAE is trained, we can restore it from its saved checkpoint and train priors on the learnt codes. To train the top-level prior, we can run

mpiexec -n {ngpus} python jukebox/train.py --hps=small_vqvae,small_prior,all_fp16,cpu_ema --name=small_prior \
--sample_length=2097152 --bs=4 --audio_files_dir={audio_files_dir} --labels=False --train --test --aug_shift --aug_blend \
--restore_vqvae=logs/small_vqvae/checkpoint_latest.pth.tar --prior --levels=2 --level=1 --weight_decay=0.01 --save_iters=1000

To train the upsampler, we can run

mpiexec -n {ngpus} python jukebox/train.py --hps=small_vqvae,small_upsampler,all_fp16,cpu_ema --name=small_upsampler \
--sample_length=262144 --bs=4 --audio_files_dir={audio_files_dir} --labels=False --train --test --aug_shift --aug_blend \
--restore_vqvae=logs/small_vqvae/checkpoint_latest.pth.tar --prior --levels=2 --level=0 --weight_decay=0.01 --save_iters=1000

We pass sample_length = n_ctx * downsample_of_level so that after downsampling the tokens match the n_ctx of the prior hps. Here, n_ctx = 8192 and downsamples = (32, 256), giving sample_lengths = (8192 * 32, 8192 * 256) = (262144, 2097152) for the bottom and top level respectively.
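
The same arithmetic in plain Python, for reference (not part of the codebase):

n_ctx = 8192
downsamples = (32, 256)                        # bottom and top level hop lengths
sample_lengths = tuple(n_ctx * d for d in downsamples)
print(sample_lengths)                          # (262144, 2097152)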

Learning rate annealing

To get the best sample quality, anneal the learning rate to 0 near the end of training. To do so, continue training from the latest checkpoint and run with

--restore_prior="path/to/checkpoint" --lr_use_linear_decay --lr_start_linear_decay={already_trained_steps} --lr_decay={decay_steps_as_needed}

Reuse pre-trained VQ-VAE and train top-level prior on new dataset from scratch.

Train without labels

Our pre-trained VQ-VAE can produce compressed codes for a wide variety of genres of music, and the pre-trained upsamplers can upsample them back to audio that sounds very similar to the original. To re-use these for a new dataset of your choice, you can retrain just the top-level prior.

To train top-level on a new dataset, run

mpiexec -n {ngpus} python jukebox/train.py --hps=vqvae,small_prior,all_fp16,cpu_ema --name=pretrained_vqvae_small_prior \
--sample_length=1048576 --bs=4 --aug_shift --aug_blend --audio_files_dir={audio_files_dir} \
--labels=False --train --test --prior --levels=3 --level=2 --weight_decay=0.01 --save_iters=1000

Training the small_prior with a batch size of 2, 4, and 8 requires 6.7 GB, 9.3 GB, and 15.8 GB of GPU memory, respectively. A few days to a week of training typically yields reasonable samples when the dataset is homogeneous (e.g. all piano pieces, songs of the same style, etc).

Near the end of training, follow the learning rate annealing steps above to anneal the learning rate to 0.

Sample from new model

You can then run sample.py with the top-level of our models replaced by your new model. To do so,

  • Add an entry my_model=("vqvae", "upsampler_level_0", "upsampler_level_1", "small_prior") in MODELS in make_models.py.
  • Update the small_prior dictionary in hparams.py to include restore_prior='path/to/checkpoint'. If you changed any hps directly on the command line when training (e.g. heads), make sure to update them in the dictionary too so that make_models restores your checkpoint correctly.
  • Run sample.py as outlined in the sampling section, but now with --model=my_model

For example, let's say we trained small_vqvae, small_prior, and small_upsampler under /path/to/jukebox/logs. In make_models.py, we are going to declare a tuple of the new models as my_model.

MODELS = {
    '5b': ("vqvae", "upsampler_level_0", "upsampler_level_1", "prior_5b"),
    '5b_lyrics': ("vqvae", "upsampler_level_0", "upsampler_level_1", "prior_5b_lyrics"),
    '1b_lyrics': ("vqvae", "upsampler_level_0", "upsampler_level_1", "prior_1b_lyrics"),
    'my_model': ("my_small_vqvae", "my_small_upsampler", "my_small_prior"),
}

Next, in hparams.py, we add them to the registry with the corresponding restore_paths and any other command line options used during training. Another important note: for top-level priors with lyric conditioning, we have to locate a self-attention layer that shows alignment between the lyric and music tokens. Look for layers where prior.prior.transformer._attn_mods[layer].attn_func is either 6 or 7. If your model is starting to sing along with the lyrics, it means some (layer, head) pair has learned the alignment. Congrats!

my_small_vqvae = Hyperparams(
    restore_vqvae='/path/to/jukebox/logs/small_vqvae/checkpoint_some_step.pth.tar',
)
my_small_vqvae.update(small_vqvae)
HPARAMS_REGISTRY["my_small_vqvae"] = my_small_vqvae

my_small_prior = Hyperparams(
    restore_prior='/path/to/jukebox/logs/small_prior/checkpoint_latest.pth.tar',
    level=1,
    labels=False,
    # TODO For the two lines below, if `--labels` was used and the model is
    # trained with lyrics, find and enter the layer, head pair that has learned
    # alignment.
    alignment_layer=47,
    alignment_head=0,
)
my_small_prior.update(small_prior)
HPARAMS_REGISTRY["my_small_prior"] = my_small_prior

my_small_upsampler = Hyperparams(
    restore_prior='/path/to/jukebox/logs/small_upsampler/checkpoint_latest.pth.tar',
    level=0,
    labels=False,
)
my_small_upsampler.update(small_upsampler)
HPARAMS_REGISTRY["my_small_upsampler"] = my_small_upsampler

Train with labels

To train with your own metadata for your audio files, implement get_metadata in data/files_dataset.py to return the artist, genre, and lyrics for a given audio file. For now, you can pass '' for lyrics to not use any lyrics.
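
How you implement get_metadata depends entirely on how your metadata is stored. As one hedged illustration, assuming a sidecar JSON file next to each audio file (check the actual method signature and return format in data/files_dataset.py before adapting this):

import json, os

# Hypothetical get_metadata: reads artist/genre/lyrics from track01.json next to track01.wav.
def get_metadata(self, filename, test):
    meta_path = os.path.splitext(filename)[0] + '.json'
    if os.path.exists(meta_path):
        with open(meta_path) as f:
            meta = json.load(f)
        return meta.get('artist', 'unknown'), meta.get('genre', 'unknown'), meta.get('lyrics', '')
    return 'unknown', 'unknown', ''   # fall back to unknown labels and no lyrics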

For training with labels, we'll use small_labelled_prior in hparams.py and set labels=True,labels_v3=True. We use two kinds of label information:

  • Artist/Genre:
    • For each file, we return an artist_id and a list of genre_ids. The reason we have a list rather than a single genre_id is that in v2 we split genres like blues_rock into a bag of words [blues, rock] and pass at most max_bow_genre_size of them, whereas in v3 we treat each genre as a single word and just set max_bow_genre_size=1.
    • Update the v3_artist_ids and v3_genre_ids to use ids from your new dataset.
    • In small_labelled_prior, set the hps y_bins = (number_of_genres, number_of_artists) and max_bow_genre_size=1 (see the sketch after this list).
  • Timing:
    • For each chunk of audio, we return the total_length of the song, the offset the current audio chunk is at and the sample_length of the audio chunk. We have three timing embeddings: total_length, our current position, and our current position as a fraction of the total length, and we divide the range of these values into t_bins discrete bins.
    • In small_labelled_prior, set the hps min_duration and max_duration to be the shortest/longest duration of audio files you want for your dataset, and t_bins for how many bins you want to discretize timing information into. Note min_duration * sr needs to be at least sample_length to have an audio chunk in it.
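
As a concrete (and hypothetical) illustration of those settings, the overrides could be recorded in hparams.py using the same registry pattern shown in the previous section; the counts and durations below are placeholders for your own dataset:

# Hypothetical labelled-prior overrides; the numbers are placeholders for your dataset.
my_labelled_prior = Hyperparams(
    labels=True,
    labels_v3=True,
    y_bins=(120, 4111),        # (number_of_genres, number_of_artists)
    max_bow_genre_size=1,      # v3: each genre is a single word
    min_duration=60.0,         # shortest audio file you want to train on, in seconds
    max_duration=600.0,        # longest audio file, in seconds
    t_bins=64,                 # number of bins for the timing embeddings
)
my_labelled_prior.update(small_labelled_prior)
HPARAMS_REGISTRY["my_labelled_prior"] = my_labelled_prior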

After these modifications, to train a top-level with labels, run

mpiexec -n {ngpus} python jukebox/train.py --hps=vqvae,small_labelled_prior,all_fp16,cpu_ema --name=pretrained_vqvae_small_prior_labels \
--sample_length=1048576 --bs=4 --aug_shift --aug_blend --audio_files_dir={audio_files_dir} \
--labels=True --train --test --prior --levels=3 --level=2 --weight_decay=0.01 --save_iters=1000

For sampling, follow the same instructions as above, but use small_labelled_prior instead of small_prior.

Train with lyrics

To additionally train with lyrics, update get_metadata in data/files_dataset.py to return the lyrics too. For training with lyrics, we'll use small_single_enc_dec_prior in hparams.py.

  • Lyrics:
    • For each file, we linearly align the lyric characters to the audio, find the position in the lyrics that corresponds to the midpoint of our audio chunk, and pass a window of n_tokens lyric characters centred around that (see the sketch after this list).
    • In small_single_enc_dec_prior, set the hps use_tokens=True and n_tokens to the number of lyric characters to use for an audio chunk. Set it according to the sample_length you're training on, so that it is large enough that the lyrics for an audio chunk almost always fit inside a window of that size.
    • If you use a non-English vocabulary, update text_processor.py with your new vocab and set n_vocab to the number of characters in the vocabulary in small_single_enc_dec_prior. In v2 we had n_vocab=80, while in v3 we missed the + character, so n_vocab=79.
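
The linear alignment in the first bullet amounts to something like the following sketch (a simplified, hypothetical version; the real dataset code handles more cases):

# Simplified sketch of picking the n_tokens lyric window for an audio chunk.
# total_length, offset and sample_length are in audio samples; lyrics is a string.
def get_lyric_window(lyrics, total_length, offset, sample_length, n_tokens):
    midpoint = offset + sample_length / 2
    center = int(len(lyrics) * midpoint / total_length)      # linear character alignment
    start = max(0, min(center - n_tokens // 2, len(lyrics) - n_tokens))
    return lyrics[start:start + n_tokens]                     # window centred on the midpoint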

After these modifications, to train a top-level with labels and lyrics, run

mpiexec -n {ngpus} python jukebox/train.py --hps=vqvae,small_single_enc_dec_prior,all_fp16,cpu_ema --name=pretrained_vqvae_small_single_enc_dec_prior_labels \
--sample_length=786432 --bs=4 --aug_shift --aug_blend --audio_files_dir={audio_files_dir} \
--labels=True --train --test --prior --levels=3 --level=2 --weight_decay=0.01 --save_iters=1000

To simplify hps choices, here we used a single_enc_dec model like the 1b_lyrics model that combines both encoder and decoder of the transformer into a single model. We do so by merging the lyric vocab and vq-vae vocab into a single larger vocab, and flattening the lyric tokens and the vq-vae codes into a single sequence of length n_ctx + n_tokens. This uses attn_order=12 which includes prime_attention layers with keys/values from lyrics and queries from audio. If you instead want to use a model with the usual encoder-decoder style transformer, use small_sep_enc_dec_prior.

For sampling, follow the same instructions as above, but use small_single_enc_dec_prior instead of small_prior. To also get the alignment between lyrics and samples in the saved HTML, you'll need to set alignment_layer and alignment_head in small_single_enc_dec_prior. To find which layer/head is best to use, run a forward pass on a training example, save the attention weight tensors for all prime_attention layers, and pick the (layer, head) which has the best linear alignment pattern between the lyric keys and music queries.
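
There is no single prescribed score for "best linear alignment"; as one hedged illustration, given saved attention tensors of shape (n_heads, music_queries, lyric_keys) per layer, you could rank heads by how monotonically the attended lyric position advances:

import torch

# Hypothetical ranking of saved prime_attention weights; attn_per_layer is assumed to be
# a dict {layer: tensor of shape (n_heads, n_music_queries, n_lyric_keys)}.
def pick_alignment_head(attn_per_layer):
    best = (None, None, -1.0)
    for layer, w in attn_per_layer.items():
        peaks = w.argmax(dim=-1).float()        # lyric position each music query attends to
        deltas = peaks[:, 1:] - peaks[:, :-1]   # should mostly move forward for aligned heads
        scores = (deltas >= 0).float().mean(dim=-1)
        head = int(scores.argmax())
        if float(scores[head]) > best[2]:
            best = (layer, head, float(scores[head]))
    return best                                  # (alignment_layer, alignment_head, score)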

Fine-tune pre-trained top-level prior to new style(s)

Previously, we showed how to train a small top-level prior from scratch. Assuming you have a GPU with at least 15 GB of memory and support for fp16, you could fine-tune from our pre-trained 1B top-level prior. Here are the steps:

  • Support --labels=True by implementing get_metadata in jukebox/data/files_dataset.py for your dataset.
  • Add new entries in jukebox/data/ids. We recommend replacing existing mappings (e.g. renaming "unknown", etc. with styles of your choice). This reuses the pre-trained style vectors as initialization and could potentially save some compute.

After these modifications, run

mpiexec -n {ngpus} python jukebox/train.py --hps=vqvae,prior_1b_lyrics,all_fp16,cpu_ema --name=finetuned \
--sample_length=1048576 --bs=1 --aug_shift --aug_blend --audio_files_dir={audio_files_dir} \
--labels=True --train --test --prior --levels=3 --level=2 --weight_decay=0.01 --save_iters=1000

To get the best sample quality, it is recommended to anneal the learning rate at the end of training. Training the 5B top-level prior requires GPipe, which is not supported in this release.

Citation

Please cite using the following bibtex entry:

@article{dhariwal2020jukebox,
  title={Jukebox: A Generative Model for Music},
  author={Dhariwal, Prafulla and Jun, Heewoo and Payne, Christine and Kim, Jong Wook and Radford, Alec and Sutskever, Ilya},
  journal={arXiv preprint arXiv:2005.00341},
  year={2020}
}

License

Noncommercial Use License

It covers both released code and weights.
