

ChainerRL and PFRL


ChainerRL (this repository) is a deep reinforcement learning library that implements various state-of-the-art deep reinforcement learning algorithms in Python using Chainer, a flexible deep learning framework. PFRL is the PyTorch analog of ChainerRL.

Example trained agents: Breakout, Humanoid, Grasping, Atlas.

Installation

ChainerRL is tested with Python 3.6. For other requirements, see requirements.txt.

ChainerRL can be installed via PyPI:

pip install chainerrl

It can also be installed from the source code:

python setup.py install

Refer to the Installation documentation for more details.

Getting started

You can try the ChainerRL Quickstart Guide first, or check the examples for Atari 2600 and OpenAI Gym.

For more information, you can refer to ChainerRL's documentation.
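The agent–environment interaction loop that the examples build on can be sketched without any framework at all. The snippet below is a hypothetical, dependency-free illustration of that loop using tabular Q-learning on a toy chain environment (`ChainEnv` and `train` are made-up names, not part of ChainerRL's API):

```python
import random
from collections import defaultdict

class ChainEnv:
    """Hypothetical 5-state chain: moving right reaches the goal (reward 1)."""
    def __init__(self):
        self.n_states, self.state = 5, 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):  # action: 0 = left, 1 = right
        self.state = max(0, self.state - 1) if action == 0 else self.state + 1
        done = self.state == self.n_states - 1
        return self.state, (1.0 if done else 0.0), done, {}

def train(env, episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)  # Q-values keyed by (state, action)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: q[(s, act)])
            s2, r, done, _ = env.step(a)
            # one-step Q-learning bootstrap update
            target = r + (0.0 if done else gamma * max(q[(s2, 0)], q[(s2, 1)]))
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

q = train(ChainEnv())
```

ChainerRL packages each piece of this loop (exploration, replay, the update rule) into reusable components, so the real examples differ mainly in which agent class and network they plug in.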

Algorithms

| Algorithm | Discrete Action | Continuous Action | Recurrent Model | Batch Training | CPU Async Training |
|---|---|---|---|---|---|
| DQN (including DoubleDQN etc.) | ✓ | ✓ (NAF) | ✓ | ✓ | ✗ |
| Categorical DQN | ✓ | ✗ | ✓ | ✓ | ✗ |
| Rainbow | ✓ | ✗ | ✓ | ✓ | ✗ |
| IQN | ✓ | ✗ | ✓ | ✓ | ✗ |
| DDPG | ✗ | ✓ | ✓ | ✓ | ✗ |
| A3C | ✓ | ✓ | ✓ | ✓ (A2C) | ✓ |
| ACER | ✓ | ✓ | ✓ | ✗ | ✓ |
| NSQ (N-step Q-learning) | ✓ | ✓ (NAF) | ✓ | ✗ | ✓ |
| PCL (Path Consistency Learning) | ✓ | ✓ | ✓ | ✗ | ✓ |
| PPO | ✓ | ✓ | ✓ | ✓ | ✗ |
| TRPO | ✓ | ✓ | ✓ | ✓ | ✗ |
| TD3 | ✗ | ✓ | ✗ | ✓ | ✗ |
| SAC | ✗ | ✓ | ✗ | ✓ | ✗ |
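To make one entry of the table concrete, the first row notes that DQN also covers variants such as DoubleDQN. The difference between the two lies entirely in how the bootstrap target is computed; the sketch below illustrates it with plain Python and hypothetical Q-values (the function names are illustrative, not ChainerRL API):

```python
def dqn_target(r, gamma, q_target_next, done):
    """Standard DQN: the target network both selects and evaluates the action."""
    return r if done else r + gamma * max(q_target_next)

def double_dqn_target(r, gamma, q_online_next, q_target_next, done):
    """Double DQN: the online network selects the action,
    the target network evaluates it, reducing overestimation bias."""
    if done:
        return r
    a = max(range(len(q_online_next)), key=lambda i: q_online_next[i])
    return r + gamma * q_target_next[a]

# Hypothetical next-state Q-values under each network.
q_online = [1.0, 2.0]   # online net prefers action 1
q_target = [3.0, 0.5]   # target net overestimates action 0

t_dqn = dqn_target(1.0, 0.99, q_target, False)                    # 1 + 0.99 * 3.0
t_ddqn = double_dqn_target(1.0, 0.99, q_online, q_target, False)  # 1 + 0.99 * 0.5
```

When the target network overestimates an action the online network would not pick, the Double DQN target is smaller than the DQN target, which is exactly the bias correction the variant was designed for.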

In addition to the algorithms listed above, ChainerRL also implements a number of useful complementary techniques; see the documentation for the full list.

Visualization

ChainerRL includes a set of accompanying visualization tools that help developers understand and debug their RL agents. With these tools, the behavior of ChainerRL agents can be easily inspected from a browser UI.

Environments

Environments that support a subset of OpenAI Gym's interface (the reset and step methods) can be used.
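This means a custom environment only needs to implement those two methods with Gym-style return values. The `CountdownEnv` below is a hypothetical minimal example of such an interface (observation, reward, done flag, info dict):

```python
class CountdownEnv:
    """Hypothetical environment: count down from 3; episode ends at 0."""

    def reset(self):
        self.t = 3
        return self.t                      # initial observation

    def step(self, action):
        self.t -= 1
        done = self.t <= 0
        reward = 1.0 if done else 0.0      # reward only on termination
        return self.t, reward, done, {}    # (obs, reward, done, info)

env = CountdownEnv()
obs, done = env.reset(), False
while not done:
    obs, reward, done, info = env.step(0)  # dynamics ignore the action here
```

Because agents only ever call `reset` and `step`, any object with this shape can be dropped into a training loop in place of a Gym environment.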

Contributing

Any kind of contribution to ChainerRL would be highly appreciated! If you are interested in contributing to ChainerRL, please read CONTRIBUTING.md.

License

MIT License.

Citations

To cite ChainerRL in publications, please cite our JMLR paper:

@article{JMLR:v22:20-376,
  author  = {Yasuhiro Fujita and Prabhat Nagarajan and Toshiki Kataoka and Takahiro Ishikawa},
  title   = {ChainerRL: A Deep Reinforcement Learning Library},
  journal = {Journal of Machine Learning Research},
  year    = {2021},
  volume  = {22},
  number  = {77},
  pages   = {1-14},
  url     = {http://jmlr.org/papers/v22/20-376.html}
}