• Stars: 406
• Rank: 106,421 (top 3%)
• Language: Python
• License: Apache License 2.0
• Created: about 4 years ago
• Updated: 12 months ago


DQN Zoo

DQN Zoo is a collection of reference implementations of reinforcement learning agents developed at DeepMind based on the Deep Q-Network (DQN) agent.

It aims to be research-friendly, self-contained and readable. Each agent is implemented using JAX, Haiku and RLax, and is a best-effort replication of the corresponding paper implementation. On average, each agent reproduces the results reported in its paper on the standard set of 57 Atari games.

Directory    Paper
dqn          Human Level Control Through Deep Reinforcement Learning
double_q     Deep Reinforcement Learning with Double Q-learning
prioritized  Prioritized Experience Replay
c51          A Distributional Perspective on Reinforcement Learning
qrdqn        Distributional Reinforcement Learning with Quantile Regression
rainbow      Rainbow: Combining Improvements in Deep Reinforcement Learning
iqn          Implicit Quantile Networks for Distributional Reinforcement Learning

Plot of median human-normalized score over all 57 Atari games for each agent:

[Figure: summary plot]

Quick start

NOTE: Only Python 3.9 and above on Linux is supported.

Follow these steps to quickly clone the DQN Zoo repository, install all required dependencies and start running DQN. The prerequisite for these steps is an NVIDIA GPU with recent CUDA drivers.

  1. Install Docker version 19.03 or later (for the --gpus flag).

  2. Install NVIDIA Container Toolkit.

  3. Enable sudoless docker.

  4. Verify the previous steps were successful by running:
    docker run --gpus all --rm nvidia/cuda:11.1-base nvidia-smi

  5. Download and run the script run.sh. It automatically downloads the Atari ROMs from http://www.atarimania.com. The ROMs are available there for free, but make sure the respective license covers your particular use case.

Running this script will:

1.  Clone the DQN Zoo repository.
2.  Build a Docker image with all necessary dependencies and run unit tests.
3.  Start a short run of DQN on Pong in a GPU-accelerated container.

NOTE: run.sh, Dockerfile and docker_requirements.txt together provide a self-contained example of the dependencies and commands needed to run an agent in DQN Zoo. Using Docker is not a requirement; if Dockerfile is not used, the list of dependencies to install may have to be adapted to your environment. Running on a GPU is not a hard requirement either: agents can be run on the CPU by specifying the flag --jax_platform_name=cpu.

Goals

  • Serve as a collection of reference implementations of DQN-based agents developed at DeepMind.
  • Reproduce results reported in papers, on average.
  • Implement agents purely in Python, using JAX, Haiku and RLax.
  • Have minimal dependencies.
  • Be easy to read.
  • Be easy to modify and customize after forking.

Non-goals

  • Be a library or framework (these agents are intended to be forked for research).
  • Be flexible, general and support multiple use cases (at odds with understandability).
  • Support many environments (users can easily add new ones).
  • Include every DQN variant that exists.
  • Incorporate many cool libraries (harder to read, easy for the user to do this after forking, different users prefer different libraries, less self-contained).
  • Optimize speed and efficiency at the cost of readability or matching algorithmic details in the papers (no C++, keep to a single stream of experience).

Code structure

  • Each directory contains a published DQN variant configured to run on Atari.
  • agent.py in each agent directory contains an agent class that includes reset(), step(), get_state() and set_state() methods (a usage sketch follows this list).
  • parts.py contains functions and classes used by many of the agents including classes for accumulating statistics and the main training and evaluation loop run_loop().
  • replay.py contains functions and classes relating to experience replay.
  • networks.py contains Haiku networks used by the agents.
  • processors.py contains components for standard Atari preprocessing.
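
As a rough illustration of how these pieces fit together, the sketch below drives an agent exposing the interface above through a single episode. The names and signatures are simplified assumptions for illustration; they do not reproduce the exact classes in agent.py or parts.py.

# Minimal sketch (hypothetical signatures): interacting with an agent that
# exposes reset(), step(), get_state() and set_state().
def run_episode(agent, environment):
  """Runs one episode and returns the sum of unmodified rewards."""
  agent.reset()
  timestep = environment.reset()
  episode_return = 0.0
  while not timestep.last():
    action = agent.step(timestep)   # The agent observes the timestep and selects an action.
    timestep = environment.step(action)
    episode_return += timestep.reward
  agent.step(timestep)              # Let the agent observe the final timestep.
  return episode_return

# get_state() and set_state() support checkpointing and restoring the agent:
#   state = agent.get_state()
#   ...
#   agent.set_state(state)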

Implementation notes

Generally we went with a flatter approach for easier code comprehension. Excessive nesting, indirection and generalization have been avoided, but not to the extreme of having a single file per agent. This has resulted in some degree of code duplication, but this is less of a maintenance issue as the code base is intended to be relatively static.

Some implementation details:

  • The main training and evaluation loop parts.run_loop() is implemented as a generator to decouple it from other concerns like logging statistics and checkpointing (see the first sketch after this list).
  • We adopted the pattern of returning a new JAX PRNG key from jitted functions. This allows for splitting keys inside jitted functions, which is currently more efficient than splitting outside and passing a key in (see the second sketch after this list).
  • Agent functions to be jitted are defined inline in the agent class __init__() instead of as decorated class methods. This emphasizes that such functions should be free of side effects; class methods are generally not pure as they often alter the class instance.
  • parts.NullCheckpoint is a placeholder for users to optionally plug in a checkpointing library appropriate for the file system they are using. This would allow resuming an interrupted training run.
  • The preprocessing and action repeat logic lives inside each agent. Doing this instead of taking the common approach of environment wrappers allows the run loop to see the "true" timesteps. This makes things like recording performance statistics and videos easier since the unmodified rewards and observations are readily available. It also allows us to express all relevant flag values in terms of environment frames, instead of a more confusing mix of environment frames and learning steps.
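
For illustration, a generator-style loop in the spirit of parts.run_loop() could look like the sketch below; the names and signature are assumptions for illustration rather than the actual interface.

# Illustrative sketch: a run loop written as a generator, so logging and
# checkpointing live in the caller rather than inside the loop itself.
def run_loop(agent, environment, max_steps):
  agent.reset()
  timestep = environment.reset()
  for step in range(max_steps):
    action = agent.step(timestep)
    timestep = environment.step(action)
    yield step, timestep, action            # Hand control back to the caller.
    if timestep.last():
      agent.reset()
      timestep = environment.reset()

# The caller decides what to do with each yielded item, e.g.:
#   for step, timestep, action in run_loop(agent, environment, max_steps=1000):
#     maybe_log_statistics(step, timestep)   # Hypothetical logging helper.
#     maybe_write_checkpoint(step)           # Hypothetical checkpointing helper.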
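
The points about returning PRNG keys and defining jitted functions inline in __init__() might look roughly like the following in JAX; the epsilon-greedy logic is a placeholder for illustration, not the actual agent code.

# Illustrative sketch: a jitted function defined inline in __init__() that
# splits the PRNG key on-device and returns the new key to the caller.
import jax
import jax.numpy as jnp

class ToyAgent:

  def __init__(self, rng_key):
    self._rng_key = rng_key

    def select_action(rng_key, q_values, epsilon):
      rng_key, explore_key, choice_key = jax.random.split(rng_key, 3)
      random_action = jax.random.randint(choice_key, (), 0, q_values.shape[-1])
      greedy_action = jnp.argmax(q_values)
      explore = jax.random.uniform(explore_key) < epsilon
      action = jnp.where(explore, random_action, greedy_action)
      return rng_key, action                 # Return the new key with the result.

    self._select_action = jax.jit(select_action)

  def step(self, q_values, epsilon=0.05):
    self._rng_key, action = self._select_action(self._rng_key, q_values, epsilon)
    return int(action)

# Example usage:
#   agent = ToyAgent(jax.random.PRNGKey(1))
#   action = agent.step(jnp.array([0.1, 0.5, 0.2]))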

Learning curves

Learning curve data is included in results.tar.gz. The archive contains a CSV file for each agent, with statistics logged during training runs. These training runs span the standard set of 57 Atari games, 5 seeds each, using default agent settings. Note Gym was used instead of Xitari.

In principle, equivalent CSV files can be generated by running the following pseudocode:

# The seven agents in this repository (see the directory table above).
AGENTS=(dqn double_q prioritized c51 qrdqn rainbow iqn)
# ATARI_GAMES should hold the names of the standard set of 57 Atari games.

for agent in "${AGENTS[@]}"; do
  for game in "${ATARI_GAMES[@]}"; do
    for seed in {1..5}; do
      python -m "dqn_zoo.${agent}.run_atari" \
          --environment_name="${game}" \
          --seed="${seed}" \
          --results_csv_path="/tmp/dqn_zoo/${agent}/${game}/${seed}/results.csv"
    done
  done
done

Each agent CSV file in results.tar.gz is then a concatenation of all associated results.csv files, with additional environment_name and seed fields. Note that the learning curve data is missing the state_value field, since logging for this quantity was added after the data was generated.
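
For illustration, a concatenation along these lines could be done with pandas; the directory layout mirrors the pseudocode above, and this helper is an assumption for illustration, not part of DQN Zoo.

# Hypothetical helper: concatenate per-run results.csv files into a single
# per-agent table with additional environment_name and seed columns.
import glob
import os

import pandas as pd

def concatenate_agent_results(agent, root='/tmp/dqn_zoo'):
  frames = []
  for path in sorted(glob.glob(os.path.join(root, agent, '*', '*', 'results.csv'))):
    game, seed = path.split(os.sep)[-3:-1]   # .../<agent>/<game>/<seed>/results.csv
    run = pd.read_csv(path)
    run['environment_name'] = game
    run['seed'] = int(seed)
    frames.append(run)
  return pd.concat(frames, ignore_index=True)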

Plots show the average score at periodic evaluation phases during training. Each episode during evaluation starts with up to 30 random no-op actions and lasts a maximum of 30 minutes. To make the plots more readable, scores have been smoothed using a moving average with window size 10.
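
For example, the window-10 moving average could be computed as follows; the column name mentioned in the usage comment is an assumption for illustration rather than the exact CSV schema.

# Illustrative smoothing: moving average with window size 10 over the
# periodic evaluation scores of one game and seed.
import pandas as pd

def smooth_scores(scores, window=10):
  """Returns a moving average of a sequence of evaluation scores."""
  return pd.Series(scores).rolling(window=window, min_periods=1).mean()

# Example: smooth_scores(results_df['eval_episode_return'])  # Assumed column name.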

Plot of average score on each individual Atari game for each agent:

[Figure: individual per-game plots]

FAQ

Q: Do these agents replicate results from their respective papers?

We aim to replicate the mean and median human normalized score over all 57 Atari games and to implement the algorithm described in each paper as closely as possible.

However, there are potential sources of differences at the level of an individual game, such as the use of Gym instead of Xitari (see the question on Gym below).

Q: Is the execution of these agents deterministic?

We try to allow for determinism on the CPU. However, it is easily broken, and note that convolutions on the GPU are not deterministic. To allow for determinism we:

  • Build a new environment at the start of every iteration.
  • Include in the training state:
    • Random number generator state.
    • Target network parameters (in addition to online network parameters).
    • Evaluation agent.

Q: Why is DQN-based agent X not included?

There was a bias towards implementing the variants the authors are most familiar with. Also one or more of the following reasons may apply:

  • Did not get round to implementing X.
  • Have yet to replicate the algorithmic details and learning performance of X.
  • It is easy to create X from components in DQN Zoo.

Q: Why not incorporate library / environment X?

X is probably very useful, but every additional library or feature is another thing new users need to read and understand. Also everyone differs in the auxiliary libraries they like to use. So the recommendation is to fork the agent you want and incorporate the features you wish in the copy. This also gives us the usual benefits of keeping dependencies to a minimum.

Q: Can you generalize X, so that I can do Y with minimal modifications?

Code generalization often makes code harder to read. This is not intended to be a library in the sense that you import an agent and inject customized components to do research. Instead it is designed to be easy to customize after forking. So rather than be everything for everyone, we aimed to keep things minimal. Then users can fork and generalize in the directions they specifically care about.

Q: Why Gym instead of Xitari?

Most DeepMind papers with experiments on Atari published results on Xitari, a fork of the Arcade Learning Environment (ALE). The learning performance of the agents in DQN Zoo was also verified on Xitari. However, since Gym and the ALE are more widely used, we have chosen to open source DQN Zoo using Gym. This does introduce another source of differences, though the settings for the Gym Atari environments have been chosen so that they behave as similarly as possible to Xitari.

Contributing

Note we are currently not accepting contributions. See CONTRIBUTING.md for details.

Citing DQN Zoo

If you use DQN Zoo in your research, please cite the papers corresponding to the agents used and this repository:

@software{dqnzoo2020github,
  title = {{DQN} {Zoo}: Reference implementations of {DQN}-based agents},
  author = {John Quan and Georg Ostrovski},
  url = {http://github.com/deepmind/dqn_zoo},
  version = {1.2.0},
  year = {2020},
}
