

Ape-X DQN & DDPG with pytorch & tensorboard

Distributed Deep Reinforcement Learning with pytorch & tensorboard


  • Sample on-line plotting while training a Distributed DQN agent on Pong (nstep is the number of steps to look ahead when bootstrapping the target Q values; a sketch of this computation follows the figure below):
    • blue: num_actors=2, nstep=1
    • orange: num_actors=8, nstep=1
    • grey: num_actors=8, nstep=5

(figure: dqn_pong training curves)
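
To make the nstep option concrete, below is a minimal sketch of how an n-step bootstrapped target is typically computed; the function name, tensor shapes, and default values are illustrative assumptions, not the repo's actual code:

    import torch

    def nstep_targets(rewards, next_q_max, dones, gamma=0.99, nstep=5):
        # rewards:    (batch, nstep) rewards r_t ... r_{t+nstep-1}
        # next_q_max: (batch,)       max_a Q_target(s_{t+nstep}, a) from the target network
        # dones:      (batch, nstep) 1.0 where the episode terminated at that step
        batch_size = rewards.size(0)
        targets = torch.zeros(batch_size)
        discount = torch.ones(batch_size)
        alive = torch.ones(batch_size)            # becomes 0 once the episode has terminated
        for k in range(nstep):
            targets += alive * discount * rewards[:, k]
            alive = alive * (1.0 - dones[:, k])   # stop accumulating rewards after a terminal step
            discount = discount * gamma
        # bootstrap with the value of the state nstep steps ahead
        return targets + alive * discount * next_q_max

With nstep=1 this reduces to the standard one-step DQN target r_t + gamma * max_a Q_target(s_{t+1}, a).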


What is included?

This repo currently contains the following agents:

  • Distributed DQN [1]
  • Distributed DDPG [2]

Code structure:

NOTE: we follow the same code structure as pytorch-rl & pytorch-dnc.

  • ./utils/factory.py

We suggest users start from ./utils/factory.py, where all the integrated Env, Model, Memory, and Agent classes are registered in Dicts; all four core classes are implemented in ./core/. This factory pattern keeps the code clean: no matter which type of Agent you want to train, or which type of Env you want to train on, you only need to modify some parameters in ./utils/options.py and ./main.py will do the rest (NOTE: ./main.py never needs to be modified).
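
As a rough illustration of that pattern (the class names below are placeholders, not the repo's actual classes), the factory dicts simply map the string keys chosen in ./utils/options.py to classes, so ./main.py only ever does dictionary lookups:

    # hypothetical sketch in the spirit of ./utils/factory.py; class names are placeholders
    class AtariEnv:      ...
    class SharedMemory:  ...
    class DQNCnnModel:   ...
    class DQNAgent:      ...

    EnvDict    = {"atari": AtariEnv}
    MemoryDict = {"shared": SharedMemory}
    ModelDict  = {"dqn-cnn": DQNCnnModel}
    AgentDict  = {"dqn": DQNAgent}

    def build(opt):
        # opt holds whatever ./utils/options.py selected; main.py never changes
        return (AgentDict[opt["agent_type"]], EnvDict[opt["env_type"]],
                ModelDict[opt["model_type"]], MemoryDict[opt["memory_type"]])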

  • ./core/single_processes/.

Each agent consists of 4 types of single processes (a minimal multiprocessing sketch follows this list):

  • Logger: plots Global/Actor/Learner/Evaluator logs onto tensorboard
  • Actor: collects experiences from its Env and pushes them to the global shared Memory
  • Learner: samples from the global shared Memory and does DRL updates on the Model
  • Evaluator: evaluates the Model during training
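
A stripped-down view of how such processes could be wired together with torch.multiprocessing is given below; the function names, the queue-based Memory stand-in, and the transition format are assumptions for illustration, not the repo's actual API:

    import torch.multiprocessing as mp

    def actor_process(rank, global_memory, n_steps=500):
        # each Actor rolls out its own Env copy and pushes transitions to the shared Memory
        for t in range(n_steps):
            transition = (rank, t, "state", "action", "reward")  # placeholder for a real rollout
            global_memory.put(transition)

    def learner_process(global_memory, n_updates=1000):
        # the Learner pulls experiences from the shared Memory and runs DRL updates on the Model
        for _ in range(n_updates):
            batch = global_memory.get()
            # ... compute the loss on `batch` and take an optimizer step here ...

    if __name__ == "__main__":
        memory = mp.Queue()      # stand-in for the global shared Memory
        actors = [mp.Process(target=actor_process, args=(rank, memory)) for rank in range(2)]
        learner = mp.Process(target=learner_process, args=(memory,))
        for p in actors + [learner]:
            p.start()
        for p in actors + [learner]:
            p.join()

The actual repo additionally runs Logger and Evaluator processes and uses a proper replay Memory rather than a plain queue.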

How to run:

You only need to modify some parameters in ./utils/options.py to train a new configuration.

  • Configure your training in ./utils/options.py:
  • line 13: add an entry to CONFIGS to define your training (agent_type, env_type, game, memory_type, model_type); a rough sketch of such an entry is given after the run command below
  • line 23: choose the entry ID you just added
  • lines 19-20: fill in your machine/cluster ID (MACHINE) and timestamp (TIMESTAMP) to define your training signature (MACHINE_TIMESTAMP). The model file of this training will be saved under this signature (./models/MACHINE_TIMESTAMP.pth), and the tensorboard visualization will also be displayed under it (first start the tensorboard server by typing tensorboard --logdir logs/ in bash, then open http://localhost:6006/ in your browser)
  • line 22: to train a model, set mode=1 (training visualization will be under http://localhost:6006/); to test the model of the current training, simply set mode=2
  • Run:

python main.py
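
The authoritative field definitions live in ./utils/options.py; purely as an illustration of what such a configuration amounts to (the values and variable names below are made up, not copied from the repo):

    # hypothetical sketch of a CONFIGS-style table and the knobs described above
    CONFIGS = [
        # agent_type, env_type, game,   memory_type, model_type
        ["dqn",       "atari",  "pong", "shared",    "dqn-cnn"],   # e.g. a Pong run like the one plotted above
        ["ddpg",      "gym",    "",     "shared",    "ddpg-mlp"],
    ]
    CONFIG_ID = 0            # pick the entry you just added
    MACHINE = "mymachine"    # your machine/cluster ID
    TIMESTAMP = "18080100"   # your timestamp; the signature is then mymachine_18080100
    MODE = 1                 # 1 = train; 2 = test ./models/mymachine_18080100.pth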


Dependencies:


Repos we referred to during the development of this repo:

This repo is developed together with @onlytailei.