
d3rlpy: An offline deep reinforcement learning library


d3rlpy is an offline deep reinforcement learning library for practitioners and researchers.

import d3rlpy

# prepare dataset and environment
dataset, env = d3rlpy.datasets.get_dataset("hopper-medium-v0")

# prepare algorithm
sac = d3rlpy.algos.SACConfig().create(device="cuda:0")

# train offline
sac.fit(dataset, n_steps=1000000)

# train online
sac.fit_online(env, n_steps=1000000)

# ready to control (x is a batch of observations)
actions = sac.predict(x)

⚠️ v2.x.x introduces breaking changes. If you want to stay on v1.x.x, please explicitly pin the previous version (e.g. pip install d3rlpy==1.1.1).

Key features

⚡ Most Practical RL Library Ever

  • offline RL: d3rlpy supports state-of-the-art offline RL algorithms. Offline RL is extremely powerful when online interaction is not feasible during training (e.g. robotics, medical applications).
  • online RL: d3rlpy also supports conventional state-of-the-art online training algorithms without any compromise, which means you can solve any kind of RL problem with d3rlpy alone.
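The offline setting above can be illustrated with a toy example. The following is a hypothetical sketch in plain Python (not d3rlpy code): tabular Q-learning updates are replayed over a fixed dataset of logged transitions, and the environment is never queried during training.

```python
# (state, action, reward, next_state) transitions logged by some behavior policy
dataset = [
    (0, 1, 0.0, 1),
    (1, 1, 1.0, 2),
    (0, 0, 0.0, 0),
    (1, 0, 0.0, 0),
]

n_states, n_actions, gamma, lr = 3, 2, 0.9, 0.5
q = [[0.0] * n_actions for _ in range(n_states)]

# sweep the fixed dataset repeatedly -- no environment interaction happens
for _ in range(100):
    for s, a, r, s2 in dataset:
        target = r + gamma * max(q[s2])
        q[s][a] += lr * (target - q[s][a])

# greedy policy extracted from the learned Q-table
policy = [max(range(n_actions), key=lambda a: q[s][a]) for s in range(n_states)]
```

In states 0 and 1 the learned policy prefers action 1, the action that leads toward the logged reward, even though no new experience was ever collected.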

🔰 User-friendly API

  • zero knowledge of DL libraries: d3rlpy provides many state-of-the-art algorithms through intuitive APIs. You can become an RL engineer even without knowing how to use deep learning libraries.
  • extensive documentation: d3rlpy is fully documented and accompanied by tutorials and reproduction scripts of the original papers.

🚀 Beyond State-of-the-art

  • distributional Q function: d3rlpy is the first library that supports distributional Q functions in all algorithms. The distributional Q function is known as a very powerful method for achieving state-of-the-art performance.
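As a rough illustration of what a distributional Q function optimizes (a plain-Python sketch of the idea, not d3rlpy internals): quantile-regression methods such as QR-DQN train a set of quantile estimates of the return with an asymmetric "pinball" loss instead of a single expected value.

```python
def pinball_loss(quantiles, target):
    """Mean quantile-regression (pinball) loss of estimates against a scalar target."""
    n = len(quantiles)
    total = 0.0
    for i, theta in enumerate(quantiles):
        tau = (i + 0.5) / n          # midpoint quantile level in (0, 1)
        u = target - theta           # signed error
        total += max(tau * u, (tau - 1.0) * u)
    return total / n

# a set of quantiles centered on the target incurs a smaller loss
# than one that systematically underestimates it
loss_centered = pinball_loss([-1.0, 0.0, 1.0], 0.0)
loss_shifted = pinball_loss([-3.0, -2.0, -1.0], 0.0)
```

Minimizing this loss drives each estimate toward a different quantile of the return distribution, which is what lets distributional methods capture more than the mean.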

Installation

d3rlpy supports Linux, macOS and Windows.

PyPI (recommended)


$ pip install d3rlpy

Anaconda


$ conda install conda-forge/noarch::d3rlpy

Docker


$ docker run -it --gpus all --name d3rlpy takuseno/d3rlpy:latest bash

Supported algorithms

| algorithm | discrete control | continuous control | offline RL? |
|---|---|---|---|
| Behavior Cloning (supervised learning) | ✅ | ✅ | |
| Neural Fitted Q Iteration (NFQ) | ✅ | ⛔ | ✅ |
| Deep Q-Network (DQN) | ✅ | ⛔ | |
| Double DQN | ✅ | ⛔ | |
| Deep Deterministic Policy Gradients (DDPG) | ⛔ | ✅ | |
| Twin Delayed Deep Deterministic Policy Gradients (TD3) | ⛔ | ✅ | |
| Soft Actor-Critic (SAC) | ✅ | ✅ | |
| Batch Constrained Q-learning (BCQ) | ✅ | ✅ | ✅ |
| Bootstrapping Error Accumulation Reduction (BEAR) | ⛔ | ✅ | ✅ |
| Conservative Q-Learning (CQL) | ✅ | ✅ | ✅ |
| Advantage Weighted Actor-Critic (AWAC) | ⛔ | ✅ | ✅ |
| Critic Regularized Regression (CRR) | ⛔ | ✅ | ✅ |
| Policy in Latent Action Space (PLAS) | ⛔ | ✅ | ✅ |
| TD3+BC | ⛔ | ✅ | ✅ |
| Implicit Q-Learning (IQL) | ⛔ | ✅ | ✅ |
| Decision Transformer | 🚧 | ✅ | ✅ |
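Several of the offline algorithms listed above, e.g. CQL, work by penalizing overestimated Q-values for actions the dataset never contains. A minimal sketch of the conservative penalty, written in plain Python for illustration rather than as d3rlpy internals:

```python
import math

def cql_penalty(q_values, data_action):
    """Log-sum-exp over all actions minus the Q-value of the dataset action.

    Minimizing this pushes down Q-values of all actions while pushing up the
    Q-value of the action actually observed in the dataset, discouraging
    overestimation of out-of-distribution actions.
    """
    logsumexp = math.log(sum(math.exp(q) for q in q_values))
    return logsumexp - q_values[data_action]

# if an unseen action carries an inflated Q-value, the penalty grows,
# so gradient descent on it deflates that estimate
penalty_modest = cql_penalty([1.0, 1.0, 1.0], data_action=0)
penalty_inflated = cql_penalty([1.0, 5.0, 1.0], data_action=0)
```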

Supported Q functions

Benchmark results

d3rlpy is benchmarked to ensure implementation quality. The benchmark scripts are available in the reproductions directory, and the benchmark results are available in the d3rlpy-benchmarks repository.

Examples

MuJoCo

import d3rlpy

# prepare dataset
dataset, env = d3rlpy.datasets.get_d4rl('hopper-medium-v0')

# prepare algorithm
cql = d3rlpy.algos.CQLConfig().create(device='cuda:0')

# train
cql.fit(
    dataset,
    n_steps=100000,
    evaluators={"environment": d3rlpy.metrics.EnvironmentEvaluator(env)},
)

See more datasets at d4rl.

Atari 2600

import d3rlpy

# prepare dataset (1% dataset)
dataset, env = d3rlpy.datasets.get_atari_transitions(
    'breakout',
    fraction=0.01,
    num_stack=4,
)

# prepare algorithm
cql = d3rlpy.algos.DiscreteCQLConfig(
    observation_scaler=d3rlpy.preprocessing.PixelObservationScaler(),
    reward_scaler=d3rlpy.preprocessing.ClipRewardScaler(-1.0, 1.0),
).create(device='cuda:0')

# start training
cql.fit(
    dataset,
    n_steps=1000000,
    evaluators={"environment": d3rlpy.metrics.EnvironmentEvaluator(env, epsilon=0.001)},
)

See more Atari datasets at d4rl-atari.

Online Training

import d3rlpy
import gym

# prepare environment
env = gym.make('Hopper-v3')
eval_env = gym.make('Hopper-v3')

# prepare algorithm
sac = d3rlpy.algos.SACConfig().create(device='cuda:0')

# prepare replay buffer
buffer = d3rlpy.dataset.create_fifo_replay_buffer(limit=1000000, env=env)

# start training
sac.fit_online(env, buffer, n_steps=1000000, eval_env=eval_env)
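The FIFO replay buffer used above can be modeled in plain Python (an illustrative sketch, not d3rlpy's implementation): a bounded deque that evicts the oldest transitions once the limit is reached, from which training batches are sampled uniformly.

```python
import random
from collections import deque

class FIFOReplayBuffer:
    def __init__(self, limit):
        self.transitions = deque(maxlen=limit)  # evicts oldest item when full

    def append(self, transition):
        self.transitions.append(transition)

    def sample(self, batch_size):
        # uniform sampling without replacement from stored transitions
        return random.sample(list(self.transitions), batch_size)

buffer = FIFOReplayBuffer(limit=3)
for step in range(5):
    buffer.append({"obs": step, "action": 0, "reward": 1.0})

# only the 3 most recent transitions survive
remaining = [t["obs"] for t in buffer.transitions]
```

The FIFO policy keeps the buffer's memory footprint bounded while biasing its contents toward recent experience, which is the standard choice for off-policy online training.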

Tutorials

Try the cartpole examples on Google Colaboratory!

  • offline RL tutorial: Open In Colab
  • online RL tutorial: Open In Colab

More tutorial documentation is available here.

Contributions

Any kind of contribution to d3rlpy would be highly appreciated! Please check the contribution guide.

Community

| Channel | Link |
|---|---|
| Issues | GitHub Issues |

Family projects

| Project | Description |
|---|---|
| d4rl-atari | A d4rl-style library of Google's Atari 2600 datasets |

Roadmap

The roadmap to the future release is available in ROADMAP.md.

Citation

The paper is available here.

@article{d3rlpy,
  author  = {Takuma Seno and Michita Imai},
  title   = {d3rlpy: An Offline Deep Reinforcement Learning Library},
  journal = {Journal of Machine Learning Research},
  year    = {2022},
  volume  = {23},
  number  = {315},
  pages   = {1--20},
  url     = {http://jmlr.org/papers/v23/22-0017.html}
}

Acknowledgement

This work started as a part of Takuma Seno's Ph.D project at Keio University in 2020.

This work is supported by Information-technology Promotion Agency, Japan (IPA), Exploratory IT Human Resources Project (MITOU Program) in the fiscal year 2020.
