DQN-Atari-Agents

Modularized and parallel PyTorch implementation of several DQN agents, including DDQN, Dueling DQN, Noisy DQN, C51, Rainbow, and DRQN.

This repository contains several add-ons to the base DQN algorithm. All versions can be trained from one script and include the option to train from raw pixels or RAM data. Multiprocessing was recently added to run several environments in parallel for faster training.

The following DQN versions are included:

  • DDQN
  • Dueling DDQN

Both can be enhanced with Noisy layers, PER (Prioritized Experience Replay), and multistep targets, and both can be trained in a categorical version (C51). Combining all of these add-ons yields the state-of-the-art value-based algorithm known as Rainbow.
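For orientation, the core difference of DDQN is that the online network selects the next action while the target network evaluates it, which reduces vanilla DQN's overestimation bias. A minimal NumPy sketch (illustrative only, not code from this repo):

```python
import numpy as np

def ddqn_targets(rewards, dones, q_online_next, q_target_next, gamma=0.99):
    """Double DQN targets: the online net picks the next action, the
    target net evaluates it (vanilla DQN would max over q_target_next)."""
    best_actions = np.argmax(q_online_next, axis=1)                 # selection: online net
    next_q = q_target_next[np.arange(len(rewards)), best_actions]   # evaluation: target net
    return rewards + gamma * (1.0 - dones) * next_q
```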

Planned add-ons:

  • [x] Parallel environments for faster training (wall-clock time)
  • [ ] Munchausen RL
  • [ ] DRQN (recurrent DQN)
  • [ ] Soft-DQN
  • [x] Curiosity exploration (currently only for DQN)
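Of the mechanisms mentioned above, PER is perhaps the easiest to summarize: transitions are sampled with probability proportional to their TD error, and importance-sampling weights correct the resulting bias. A deliberately simplified sketch (the class name and the O(n) list storage are mine; the repo's actual buffer will differ, typically using a sum-tree for O(log n) sampling):

```python
import numpy as np

class SimplePER:
    """Minimal proportional Prioritized Experience Replay sketch."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities = [], []

    def add(self, transition, td_error=1.0):
        # evict oldest entry when full
        if len(self.data) >= self.capacity:
            self.data.pop(0); self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((abs(td_error) + 1e-5) ** self.alpha)

    def sample(self, batch_size, beta=0.4):
        probs = np.array(self.priorities) / np.sum(self.priorities)
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # importance-sampling weights correct the non-uniform sampling
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return [self.data[i] for i in idx], idx, weights
```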

Train your Agent:

Dependencies

Trained and tested on:

  • Python 3.6
  • PyTorch 1.4.0
  • Numpy 1.15.2
  • gym 0.10.11

To train the base DDQN, simply run python run_atari_dqn.py. To train and modify your own Atari agent, the following inputs are optional:

example: python run_atari_dqn.py -env BreakoutNoFrameskip-v4 -agent dueling -u 1 -eps_frames 100000 -seed 42 -info Breakout_run1

  • agent: Specify which type of DQN agent you want to train, default is DQN - baseline! Following agent inputs are currently possible: dqn, dqn+per, noisy_dqn, noisy_dqn+per, dueling, dueling+per, noisy_dueling, noisy_dueling+per, c51, c51+per, noisy_c51, noisy_c51+per, duelingc51, duelingc51+per, noisy_duelingc51, noisy_duelingc51+per, rainbow
  • env: Name of the Atari environment, default = PongNoFrameskip-v4
  • frames: Number of frames to train, default = 5 million
  • seed: Random seed to reproduce training runs, default = 1
  • bs: Batch size for updating the DQN, default = 32
  • layer_size: Size of the hidden layer, default=512
  • n_step: Number of steps for the multistep DQN Targets
  • eval_every: Evaluate every x frames, default = 50000
  • eval_runs: Number of evaluation runs, default = 5
  • m: Replay memory size, default = 1e5
  • lr: Learning rate, default = 0.00025
  • g: Discount factor gamma, default = 0.99
  • t: Soft update parameter tau, default = 1e-3
  • eps_frames: Linear annealed frames for Epsilon, default = 150000
  • min_eps: Epsilon greedy annealing crossing point. Fast annealing until this point, from there slowly to 0 until the last frame, default = 0.1
  • ic (--intrinsic_curiosity): Add intrinsic curiosity to the extrinsic reward. 0 = only reward and no curiosity, 1 = reward and curiosity, 2 = only curiosity, default = 0
  • info: Name of the training run.
  • fill_buffer: Add samples to the replay buffer based on a random policy before agent-environment interaction. Input the number of frames to pre-add to the buffer, default = 50000
  • save_model: Specify whether the trained network shall be saved [1] or not [0], default = 1 (saved)
  • w (--worker): Number of parallel environments
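The option list above suggests a standard argparse interface. The following is a hypothetical reconstruction (argument names and defaults are copied from the list where given; everything else, such as the worker default, is an assumption):

```python
import argparse

def build_parser():
    """Hypothetical reconstruction of the CLI described above."""
    p = argparse.ArgumentParser(description="DQN-Atari-Agents training")
    p.add_argument("-agent", type=str, default="dqn")
    p.add_argument("-env", type=str, default="PongNoFrameskip-v4")
    p.add_argument("-frames", type=int, default=5_000_000)
    p.add_argument("-seed", type=int, default=1)
    p.add_argument("-bs", type=int, default=32)
    p.add_argument("-lr", type=float, default=0.00025)
    p.add_argument("-eps_frames", type=int, default=150_000)
    p.add_argument("-min_eps", type=float, default=0.1)
    p.add_argument("-w", "--worker", type=int, default=1)  # default assumed
    return p

# example matching the invocation shown above
args = build_parser().parse_args(["-env", "BreakoutNoFrameskip-v4", "-agent", "dueling"])
```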

Training progress can be viewed with TensorBoard

Just run tensorboard --logdir=runs/

Atari Games Performance:

Pong:

Hyperparameters:

  • batch_size: 32
  • seed: 1
  • layer_size: 512
  • frames: 300000
  • lr: 1e-4
  • m: 10000
  • g: 0.99
  • t: 1e-3
  • eps_frames: 100000
  • min_eps: 0.01
  • fill_buffer: 10000
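With these settings, epsilon anneals linearly to 0.01 over the first 100k frames and then decays slowly toward 0 for the rest of training, as described for min_eps above. A sketch of that schedule (the start value of 1.0 and the exact shape of the second phase are my assumptions):

```python
def epsilon(frame, eps_frames=100_000, min_eps=0.01, total_frames=300_000, start_eps=1.0):
    """Two-phase epsilon schedule: fast linear anneal from start_eps to
    min_eps over eps_frames, then slowly from min_eps to 0 by total_frames."""
    if frame < eps_frames:
        return start_eps - (start_eps - min_eps) * frame / eps_frames
    return max(0.0, min_eps * (1 - (frame - eps_frames) / (total_frames - eps_frames)))
```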

[Plot: Pong training performance]

Convergence proof for the CartPole environment

Since training the algorithms on Atari takes a lot of time, I added a quick convergence proof for the CartPole-v0 environment. You can clearly see that Rainbow outperforms the other two methods, Dueling DQN and DDQN.

[Plot: Rainbow vs. Dueling DQN vs. DDQN convergence on CartPole-v0]

To reproduce the results, the following hyperparameters were used:

  • batch_size: 32
  • seed: 1
  • layer_size: 512
  • frames: 30000
  • lr: 1e-3
  • m: 500000
  • g: 0.99
  • t: 1e-3
  • eps_frames: 1000
  • min_eps: 0.1
  • fill_buffer: 50000
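As in the Pong table, t here is the soft-update coefficient tau: after each learning step the target network is blended toward the online network by Polyak averaging. A minimal sketch (treating parameter sets as plain arrays is my simplification):

```python
import numpy as np

def soft_update(online_params, target_params, tau=1e-3):
    """Polyak averaging: target <- tau * online + (1 - tau) * target."""
    return [tau * o + (1.0 - tau) * t for o, t in zip(online_params, target_params)]
```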

It is interesting to see that the add-ons have a negative impact on the very simple CartPole environment. Still, the Dueling DDQN version clearly performs better than the standard DDQN version.

[Plots: DQN and Dueling DQN convergence on CartPole-v0]

Parallel Environments

To reduce wall-clock time during training, parallel environments are implemented. The following diagrams show the speed improvement for the two environments CartPole-v0 and LunarLander-v2, tested with 1, 2, 4, 6, 8, 10, and 16 workers. Each worker count was tested over 3 seeds.
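The general mechanism can be sketched as one process per environment communicating over pipes (the DummyEnv and class names below are illustrative stand-ins, not the repo's actual implementation):

```python
import multiprocessing as mp

class DummyEnv:
    """Stand-in for a gym env so the sketch is self-contained."""
    def __init__(self):
        self.t = 0
    def reset(self):
        self.t = 0
        return self.t
    def step(self, action):
        self.t += 1
        return self.t, float(action), self.t >= 5, {}

def worker(conn, env_fn):
    """Runs in a child process: owns one env, answers step/close commands."""
    env = env_fn()
    obs = env.reset()
    while True:
        cmd, data = conn.recv()
        if cmd == "step":
            obs, rew, done, info = env.step(data)
            if done:
                obs = env.reset()
            conn.send((obs, rew, done, info))
        elif cmd == "close":
            conn.close()
            break

class ParallelEnvs:
    """Broadcasts one action per worker and gathers one transition each."""
    def __init__(self, env_fn, n_workers):
        self.conns, self.procs = [], []
        for _ in range(n_workers):
            parent, child = mp.Pipe()
            p = mp.Process(target=worker, args=(child, env_fn), daemon=True)
            p.start()
            self.conns.append(parent)
            self.procs.append(p)
    def step(self, actions):
        for conn, a in zip(self.conns, actions):
            conn.send(("step", a))
        return [conn.recv() for conn in self.conns]
    def close(self):
        for conn in self.conns:
            conn.send(("close", None))
        for p in self.procs:
            p.join()
```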

Convergence behavior for each worker count can be found here: CartPole-v0 and LunarLander

Help and issues:

I'm open to feedback, bug reports, improvements, or anything else. Just leave me a message or contact me.

Paper references:

Author

  • Sebastian Dittert

Feel free to use this code for your own projects or research. For citation:

@misc{DQN-Atari-Agents,
  author = {Dittert, Sebastian},
  title = {DQN-Atari-Agents: Modularized PyTorch implementation of several DQN Agents, i.a. DDQN, Dueling DQN, Noisy DQN, C51, Rainbow and DRQN},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/BY571/DQN-Atari-Agents}},
}
