• Stars: 183
• Rank: 210,154 (Top 5%)
• Language: Python
• License: MIT License
• Created: over 6 years ago
• Updated: over 1 year ago

Repository Details

Code for the blog post "Learning Montezuma’s Revenge from a Single Demonstration"

Status: Archive (code is provided as-is, no updates expected)

Learn RL policies on Atari by resetting from a demonstration

Codebase for learning to play Atari from demonstrations. Unlike other work on learning from demonstrations, we learn to maximize the game score using pure RL rather than trying to imitate the demo.

All learning is done through RL on the regular Atari environments, but we automatically build a curriculum for our agent by starting rollouts from points in a demonstration provided by a human expert. We begin by having each RL episode start near the end of the demonstration. Once the agent is able to beat or at least tie the demonstrator's score on the remaining part of the game in at least 20% of rollouts, we slowly move the starting point back in time. We keep doing this until the agent is playing from the start of the game without using the demo at all, at which point we have an RL-trained agent that beats or ties the human expert on the entire game.
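The sketch below illustrates this curriculum in Python. It is a simplified outline rather than the repository's actual training loop: demo_states, demo_score, restore_state, and run_episode are assumed interfaces standing in for the saved emulator states along the demo, the demonstrator's score on the remaining game, an environment reset to a saved state, and a full RL rollout.

```python
def demo_curriculum(demo_states, demo_score, restore_state, run_episode,
                    n_rollouts=100, success_threshold=0.2, step_back=1):
    """Move the episode starting point back through the demo as the agent improves.

    demo_states[i]   : emulator state saved at step i of the demonstration (assumed helper).
    demo_score(i)    : score the demonstrator obtained from step i to the end of the game.
    restore_state(s) : resets the environment to demonstration state s.
    run_episode()    : plays one RL episode from the current state and returns its score.
    """
    start = len(demo_states) - 1                      # begin near the end of the demonstration
    while start > 0:
        scores = []
        for _ in range(n_rollouts):                   # these rollouts double as RL training data
            restore_state(demo_states[start])
            scores.append(run_episode())
        # Once the agent ties or beats the demonstrator on the remaining part of the game
        # in at least 20% of rollouts, move the starting point earlier in the demonstration.
        success_rate = sum(s >= demo_score(start) for s in scores) / n_rollouts
        if success_rate >= success_threshold:
            start = max(0, start - step_back)
    return start                                      # 0: the agent now plays from the game's true start
```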

Impression of our agent learning to reach the first key in Montezuma’s Revenge using RL and starting each episode from a demonstration state. When our agent starts playing the game, we place it right in front of the key, requiring it to only take a single jump to find success. After our agent has learned to do this consistently, we slowly move the starting point back in time. Our agent might then find itself halfway up the ladder that leads to the key. Once it learns to climb the ladder from there, we can have it start at the point where it needs to jump over the skull. After it learns to do that, we can have it start on the rope leading to the floor of the room, etc. Eventually, the agent starts in the original starting state of the game and is able to reach the key completely by itself.

Replaying demo transitions

When resetting to a state from the demonstration and using recurrent policies, we need to make sure that the hidden state of the agent accurately reflects the recent game history: simply resetting the state to zero is not sufficient. At the start of each episode we therefore recompute the hidden state by replaying the last few demonstration transitions preceding the selected starting state.
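A minimal sketch of this warm-up step, assuming a recurrent policy object that exposes an initial_state attribute and a step(obs, state) method returning the updated state; the names are illustrative, not the repository's actual API.

```python
def warm_start_hidden_state(policy, demo_observations, start_idx, n_warmup=32):
    """Recompute the recurrent state by replaying demo transitions preceding start_idx."""
    state = policy.initial_state                      # resetting to zeros alone is not enough
    for obs in demo_observations[max(0, start_idx - n_warmup):start_idx]:
        _, _, state = policy.step(obs, state)         # feed each preceding demo observation
    return state                                      # state now reflects the recent game history
```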

PPO implementation

Our PPO implementation is derived from the one in OpenAI Baselines. We use generalized advantage estimation with lambda = 0.95 and gamma between 0.999 and 0.9999. For every minibatch we process during training, we recompute the hidden state of our policy at the start of that minibatch rather than reusing the value computed with the previous set of parameters. Effectively this amounts to using a larger minibatch in the time dimension and discarding the first part of the batch when calculating the value loss and policy loss.
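For reference, here is a standalone sketch of generalized advantage estimation with these hyperparameters (lambda = 0.95, gamma in the 0.999 to 0.9999 range). It follows the standard GAE recursion and is not the repository's exact implementation.

```python
import numpy as np

def gae_advantages(rewards, values, dones, gamma=0.999, lam=0.95):
    """Generalized advantage estimation over one rollout.

    rewards, dones: length-T arrays; values: length T+1 (includes a bootstrap value).
    Returns (advantages, value_targets).
    """
    values = np.asarray(values, dtype=np.float64)
    T = len(rewards)
    advantages = np.zeros(T, dtype=np.float64)
    last_adv = 0.0
    for t in reversed(range(T)):
        nonterminal = 1.0 - float(dones[t])           # zero out bootstrapping at episode ends
        delta = rewards[t] + gamma * values[t + 1] * nonterminal - values[t]
        last_adv = delta + gamma * lam * nonterminal * last_adv
        advantages[t] = last_adv
    return advantages, advantages + values[:-1]       # value targets = advantages + V(s_t)
```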

How to use

Training is performed using the train_atari.py script. The code uses MPI (via Horovod) for distributed training. We recommend running on at least 8 GPUs, preferably more; we used 128 for Montezuma's Revenge. The default hyperparameter settings work well for Montezuma's Revenge. When training stops making progress, lower the learning rate and the entropy coefficient to help the agent get unstuck.

Results

So far we have been able to train an agent that achieves a high score of 74,500 on Montezuma's Revenge from a single demonstration, better than any previously published result. The resulting policy is reasonably robust: it achieves a score of 10,000 when evaluated with sticky frames and 8,400 with epsilon-greedy noise (epsilon = 0.01), also the best published results so far.
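For context, the sketch below shows how these two evaluation perturbations are commonly implemented. It assumes that "sticky frames" refers to the standard sticky-actions protocol (repeating the previously executed action with some probability, commonly 0.25) and that epsilon-greedy noise replaces the policy's action with a random one with probability 0.01; neither function is taken from this repository.

```python
import random

def sticky_action(policy_action, prev_action, sticky_prob=0.25):
    """Sticky-actions evaluation: sometimes repeat the previously executed action."""
    return prev_action if random.random() < sticky_prob else policy_action

def epsilon_greedy_action(policy_action, n_actions, epsilon=0.01):
    """Epsilon-greedy evaluation: occasionally take a uniformly random action."""
    return random.randrange(n_actions) if random.random() < epsilon else policy_action
```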

Our agent playing Montezuma’s Revenge. The agent achieves a final score of 74,500 over approximately 12 minutes of play (video is double speed). Although much of the agent’s game mirrors the demonstration, the agent surpasses the demonstration score of 71,500 by picking up more diamonds along the way. In addition, the agent learns to exploit a flaw in the emulator to make a key re-appear at minute 4:25 of the video, something not present in the demonstration.

The trained model for Montezuma's Revenge can be downloaded here.

Remaining challenges

The algorithm is still fragile: some runs don't converge for Montezuma's Revenge, and the one that did converge required running at large scale, with a restart from a checkpoint halfway. We have not yet been able to match expert performance on Gravitar and Pitfall.

The demos

The repo includes demos for Montezuma's Revenge, PrivateEye, Pitfall, Gravitar, and Pong. These demonstrations were obtained through tool-assisted play, using this code.

Related work

Our main insight is that we can make our RL problem easier to solve by decomposing it into a curriculum of subtasks requiring short action sequences; we construct this curriculum by starting each RL episode from a demonstration state. A variant of the same idea was used recently for reverse curriculum generation for robotics, where a curriculum was constructed by iteratively perturbing a set of starting states using random actions, and selecting the resulting states with the right level of difficulty.

Starting episodes by resetting from demonstration states was previously proposed, but without constructing a curriculum that gradually moves the starting state back from the end of the demonstration to the beginning. Several researchers report benefit from that approach when it is combined with imitation learning. For our use case, we found such a curriculum to be vitally important for deriving benefit from the demonstration.

Recently, DeepMind has shown an agent learning Montezuma's Revenge by imitation learning from a demonstration: one approach trains an agent to reach the same states seen in a YouTube video of Montezuma's Revenge, and another combines a sophisticated version of Q-learning with maximizing the likelihood of the actions taken in a demonstration. The advantage of these approaches is that they do not require as much control over the environment as our technique does: they do not reset the environment to states other than the starting state of the game, and they do not presume access to the full game states encountered in the demonstration. Our method differs by directly optimizing what we care about, the game score, rather than making the agent imitate the demonstration. It therefore does not suffer from overfitting to a sub-optimal demonstration, and it could offer benefits in multi-player games where we want to optimize performance against opponents other than the one in the demonstration.

More Repositories

1. whisper - Robust Speech Recognition via Large-Scale Weak Supervision (Python, 62,693 stars)
2. openai-cookbook - Examples and guides for using the OpenAI API (MDX, 58,610 stars)
3. gym - A toolkit for developing and comparing reinforcement learning algorithms. (Python, 34,442 stars)
4. CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image (Jupyter Notebook, 22,966 stars)
5. openai-python - The official Python library for the OpenAI API (Python, 22,561 stars)
6. gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners" (Python, 21,450 stars)
7. chatgpt-retrieval-plugin - The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language. (Python, 21,032 stars)
8. baselines - OpenAI Baselines: high-quality implementations of reinforcement learning algorithms (Python, 15,622 stars)
9. gpt-3 - GPT-3: Language Models are Few-Shot Learners (15,573 stars)
10. swarm - Educational framework exploring ergonomic, lightweight multi-agent orchestration. Managed by OpenAI Solution team. (Python, 14,944 stars)
11. evals - Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks. (Python, 14,607 stars)
12. tiktoken - tiktoken is a fast BPE tokeniser for use with OpenAI's models. (Python, 11,374 stars)
13. triton - Development repository for the Triton language and compiler (C++, 11,077 stars)
14. DALL-E - PyTorch package for the discrete VAE used for DALL·E. (Python, 10,760 stars)
15. shap-e - Generate 3D objects conditioned on text or images (Python, 10,285 stars)
16. spinningup - An educational resource to help anyone learn deep reinforcement learning. (Python, 8,587 stars)
17. openai-node - The official Node.js / TypeScript library for the OpenAI API (TypeScript, 7,703 stars)
18. universe - Universe: a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications. (Python, 7,385 stars)
19. jukebox - Code for the paper "Jukebox: A Generative Model for Music" (Python, 7,326 stars)
20. point-e - Point cloud diffusion for 3D model synthesis (Python, 5,777 stars)
21. consistency_models - Official repo for consistency models. (Python, 5,725 stars)
22. guided-diffusion (Python, 5,000 stars)
23. plugins-quickstart - Get a ChatGPT plugin up and running in under 5 minutes! (Python, 4,133 stars)
24. transformer-debugger (Python, 4,003 stars)
25. retro - Retro Games in Gym (C, 3,361 stars)
26. glide-text2im - GLIDE: a diffusion-based text-conditional image synthesis model (Python, 3,277 stars)
27. glow - Code for reproducing results in "Glow: Generative Flow with Invertible 1x1 Convolutions" (Python, 3,016 stars)
28. mujoco-py - MuJoCo is a physics engine for detailed, efficient rigid body simulations with contacts. mujoco-py allows using MuJoCo from Python 3. (Cython, 2,586 stars)
29. openai-quickstart-node - Node.js example app from the OpenAI API quickstart tutorial (JavaScript, 2,534 stars)
30. weak-to-strong (Python, 2,445 stars)
31. improved-gan - Code for the paper "Improved Techniques for Training GANs" (Python, 2,218 stars)
32. human-eval - Code for the paper "Evaluating Large Language Models Trained on Code" (Python, 2,204 stars)
33. improved-diffusion - Release for Improved Denoising Diffusion Probabilistic Models (Python, 2,102 stars)
34. roboschool - DEPRECATED: Open-source software for robot simulation, integrated with OpenAI Gym. (Python, 2,064 stars)
35. image-gpt (Python, 2,025 stars)
36. consistencydecoder - Consistency Distilled Diff VAE (Python, 1,933 stars)
37. finetune-transformer-lm - Code and model for the paper "Improving Language Understanding by Generative Pre-Training" (Python, 1,929 stars)
38. gpt-2-output-dataset - Dataset of GPT-2 outputs for research in detection, biases, and more (Python, 1,908 stars)
39. multiagent-particle-envs - Code for a multi-agent particle environment used in the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments" (Python, 1,871 stars)
40. pixel-cnn - Code for the paper "PixelCNN++: A PixelCNN Implementation with Discretized Logistic Mixture Likelihood and Other Modifications" (Python, 1,856 stars)
41. openai-quickstart-python - Python example app from the OpenAI API quickstart tutorial (1,685 stars)
42. requests-for-research - A living collection of deep learning problems (HTML, 1,625 stars)
43. multi-agent-emergence-environments - Environment generation code for the paper "Emergent Tool Use From Multi-Agent Autocurricula" (Python, 1,590 stars)
44. gpt-discord-bot - Example Discord bot written in Python that uses the completions API to have conversations with the `text-davinci-003` model, and the moderations API to filter the messages. (Python, 1,569 stars)
45. evolution-strategies-starter - Code for the paper "Evolution Strategies as a Scalable Alternative to Reinforcement Learning" (Python, 1,504 stars)
46. generating-reviews-discovering-sentiment - Code for "Learning to Generate Reviews and Discovering Sentiment" (Python, 1,491 stars)
47. neural-mmo - Code for the paper "Neural MMO: A Massively Multiagent Game Environment for Training and Evaluating Intelligent Agents" (Python, 1,463 stars)
48. prm800k - 800,000 step-level correctness labels on LLM solutions to MATH problems (Python, 1,371 stars)
49. openai-dotnet - The official .NET library for the OpenAI API (C#, 1,352 stars)
50. openai-assistants-quickstart - OpenAI Assistants API quickstart with Next.js. (TypeScript, 1,350 stars)
51. sparse_attention - Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" (Python, 1,347 stars)
52. maddpg - Code for the MADDPG algorithm from the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments" (Python, 1,284 stars)
53. Video-Pre-Training - Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos (Python, 1,280 stars)
54. openai-openapi - OpenAPI specification for the OpenAI API (1,235 stars)
55. lm-human-preferences - Code for the paper "Fine-Tuning Language Models from Human Preferences" (Python, 1,185 stars)
56. following-instructions-human-feedback (1,129 stars)
57. universe-starter-agent - A starter agent that can solve a number of universe environments. (Python, 1,086 stars)
58. dalle-2-preview (1,044 stars)
59. InfoGAN - Code for reproducing key results in the paper "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets" (Python, 1,029 stars)
60. grade-school-math (Python, 1,005 stars)
61. procgen - Procgen Benchmark: Procedurally-Generated Game-Like Gym-Environments (C++, 1,005 stars)
62. supervised-reptile - Code for the paper "On First-Order Meta-Learning Algorithms" (JavaScript, 955 stars)
63. blocksparse - Efficient GPU kernels for block-sparse matrix multiplication and convolution (Cuda, 941 stars)
64. automated-interpretability (Python, 896 stars)
65. random-network-distillation - Code for the paper "Exploration by Random Network Distillation" (Python, 861 stars)
66. kubernetes-ec2-autoscaler - A batch-optimized scaling manager for Kubernetes (Python, 849 stars)
67. summarize-from-feedback - Code for "Learning to summarize from human feedback" (Python, 833 stars)
68. large-scale-curiosity - Code for the paper "Large-Scale Study of Curiosity-Driven Learning" (Python, 800 stars)
69. multiagent-competition - Code for the paper "Emergent Complexity via Multi-agent Competition" (Python, 761 stars)
70. imitation - Code for the paper "Generative Adversarial Imitation Learning" (Python, 643 stars)
71. deeptype - Code for the paper "DeepType: Multilingual Entity Linking by Neural Type System Evolution" (Python, 633 stars)
72. mlsh - Code for the paper "Meta-Learning Shared Hierarchies" (Python, 588 stars)
73. iaf - Code for reproducing key results in the paper "Improving Variational Inference with Inverse Autoregressive Flow" (Python, 499 stars)
74. mujoco-worldgen - Automatic object XML generation for MuJoCo (Python, 489 stars)
75. safety-gym - Tools for accelerating safe exploration research. (Python, 421 stars)
76. vdvae - Repository for the paper "Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images" (Python, 407 stars)
77. coinrun - Code for the paper "Quantifying Transfer in Reinforcement Learning" (C++, 390 stars)
78. robogym - Robotics Gym Environments (Python, 389 stars)
79. weightnorm - Example code for Weight Normalization, from "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks" (Python, 357 stars)
80. atari-py - A packaged and slightly-modified version of https://github.com/bbitmaster/ale_python_interface (C++, 354 stars)
81. openai-security-bots (Python, 351 stars)
82. openai-gemm - Open single and half precision gemm implementations (C, 335 stars)
83. vime - Code for the paper "Curiosity-driven Exploration in Deep Reinforcement Learning via Bayesian Neural Networks" (Python, 331 stars)
84. safety-starter-agents - Basic constrained RL agents used in experiments for the "Benchmarking Safe Exploration in Deep Reinforcement Learning" paper. (Python, 312 stars)
85. ebm_code_release - Code for Implicit Generation and Generalization with Energy Based Models (Python, 311 stars)
86. CLIP-featurevis - Code for reproducing some of the diagrams in the paper "Multimodal Neurons in Artificial Neural Networks" (Python, 294 stars)
87. gym-http-api - API to access OpenAI Gym from other languages via HTTP (Python, 292 stars)
88. gym-soccer (Python, 289 stars)
89. sparse_autoencoder (Python, 287 stars)
90. robosumo - Code for the paper "Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments" (Python, 283 stars)
91. web-crawl-q-and-a-example - Learn how to crawl your website and build a Q/A bot with the OpenAI API (Jupyter Notebook, 268 stars)
92. phasic-policy-gradient - Code for the paper "Phasic Policy Gradient" (Python, 245 stars)
93. EPG - Code for the paper "Evolved Policy Gradients" (Python, 240 stars)
94. orrb - Code for the paper "OpenAI Remote Rendering Backend" (C#, 235 stars)
95. miniF2F - Formal to Formal Mathematics Benchmark (Objective-C++, 202 stars)
96. spinningup-workshop - For educational materials related to the Spinning Up workshops. (TeX, 181 stars)
97. train-procgen - Code for the paper "Leveraging Procedural Generation to Benchmark Reinforcement Learning" (Python, 170 stars)
98. human-eval-infilling - Code for the paper "Efficient Training of Language Models to Fill in the Middle" (Python, 162 stars)
99. openai-go - The official Go library for the OpenAI API (Go, 145 stars)
100. dallify-discord-bot - Example code for using OpenAI's NodeJS SDK with discord.js SDK to create a Discord Bot that uses Slash Commands. (TypeScript, 139 stars)