OpenDILab Decision AI Engine: the most comprehensive reinforcement learning framework.


Updated on 2023.08.23 (DI-engine v0.4.9)

Introduction to DI-engine

Documentation | 中文文档 | Tutorials | Feature | Task & Middleware | TreeTensor | Roadmap

DI-engine is a generalized decision intelligence engine for PyTorch and JAX.

It provides Python-first and asynchronous-native task and middleware abstractions, and modularly integrates several of the most important decision-making concepts: Env, Policy and Model. Building on these mechanisms, DI-engine supports various deep reinforcement learning algorithms with superior performance, high efficiency, well-organized documentation and unit tests (a minimal pipeline sketch follows the list below):

  • Most basic DRL algorithms: such as DQN, Rainbow, PPO, TD3, SAC, R2D2, IMPALA
  • Multi-agent RL algorithms: such as QMIX, WQMIX, MAPPO, HAPPO, ACE
  • Imitation learning algorithms (BC/IRL/GAIL): such as GAIL, SQIL, Guided Cost Learning, Implicit BC
  • Offline RL algorithms: BCQ, CQL, TD3BC, Decision Transformer, EDAC
  • Model-based RL algorithms: SVG, STEVE, MBPO, DDPPO, DreamerV3, MuZero
  • Exploration algorithms: HER, RND, ICM, NGU
  • Other algorithms: such as PER, PLR, PCGrad
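
As a concrete taste of the task and middleware abstractions mentioned above, here is a condensed sketch of an off-policy DQN training loop in the middleware style, modeled on the examples shipped in ding/example. The module paths, middleware names and signatures follow recent DI-engine versions but are assumptions that may shift between releases:

    import gym
    from ding.config import compile_config
    from ding.envs import DingEnvWrapper, BaseEnvManagerV2
    from ding.model import DQN
    from ding.policy import DQNPolicy
    from ding.data import DequeBuffer
    from ding.framework import task
    from ding.framework.context import OnlineRLContext
    from ding.framework.middleware import (
        OffPolicyLearner, StepCollector, interaction_evaluator, data_pusher, CkptSaver
    )
    from dizoo.classic_control.cartpole.config.cartpole_dqn_config import main_config, create_config


    def main():
        cfg = compile_config(main_config, create_cfg=create_config, auto=True)
        with task.start(async_mode=False, ctx=OnlineRLContext()):
            # vectorized environments for collection and evaluation
            collector_env = BaseEnvManagerV2(
                env_fn=[lambda: DingEnvWrapper(gym.make('CartPole-v0'))] * cfg.env.collector_env_num,
                cfg=cfg.env.manager,
            )
            evaluator_env = BaseEnvManagerV2(
                env_fn=[lambda: DingEnvWrapper(gym.make('CartPole-v0'))] * cfg.env.evaluator_env_num,
                cfg=cfg.env.manager,
            )
            policy = DQNPolicy(cfg.policy, model=DQN(**cfg.policy.model))
            buffer_ = DequeBuffer(size=cfg.policy.other.replay_buffer.replay_buffer_size)
            # each middleware is one reusable stage of the RL loop
            task.use(interaction_evaluator(cfg, policy.eval_mode, evaluator_env))
            task.use(StepCollector(cfg, policy.collect_mode, collector_env))
            task.use(data_pusher(cfg, buffer_))
            task.use(OffPolicyLearner(cfg, policy.learn_mode, buffer_))
            task.use(CkptSaver(policy, cfg.exp_name, train_freq=100))
            task.run()


    if __name__ == '__main__':
        main()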

DI-engine aims to standardize different decision intelligence environments and applications, supporting both academic research and prototype development. Various training pipelines and customized decision AI applications are also supported.

At the low level, DI-engine comes with a set of highly reusable modules, including RL optimization functions, PyTorch utilities and auxiliary tools.

DI-engine also includes dedicated system optimizations and designs for efficient and robust large-scale RL training.

Have fun with exploration and exploitation.

Installation

You can simply install DI-engine from PyPI with the following command:

pip install DI-engine

If you use Anaconda or Miniconda, you can install DI-engine from the opendilab conda channel with the following command:

conda install -c opendilab di-engine

For more information about installation, you can refer to the installation guide.
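
A quick way to verify the installation (assuming the ding package exposes __version__, as current releases do; a bare import also suffices):

    python -c "import ding; print(ding.__version__)"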

Our DockerHub repo can be found here; we provide a base image plus environment images with common RL environments pre-installed:
  • base: opendilab/ding:nightly
  • rpc: opendilab/ding:nightly-rpc
  • atari: opendilab/ding:nightly-atari
  • mujoco: opendilab/ding:nightly-mujoco
  • dmc: opendilab/ding:nightly-dmc2gym
  • metaworld: opendilab/ding:nightly-metaworld
  • smac: opendilab/ding:nightly-smac
  • grf: opendilab/ding:nightly-grf
  • cityflow: opendilab/ding:nightly-cityflow
  • evogym: opendilab/ding:nightly-evogym
  • d4rl: opendilab/ding:nightly-d4rl
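
For example, pulling and entering the base image uses the standard Docker CLI:

    docker pull opendilab/ding:nightly
    docker run -it --rm opendilab/ding:nightly /bin/bash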

The detailed documentation is hosted on doc | 中文文档.

Quick Start

3 Minutes Kickoff

3 Minutes Kickoff (colab)

How to migrate a new RL Env | 如何迁移一个新的强化学习环境

How to customize the neural network model | 如何定制策略使用的神经网络模型

How to test/deploy a trained RL policy | 测试/部署强化学习策略的样例
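
If you prefer launching training from Python instead of the ding CLI, the classic config-driven entry point looks like the sketch below; a minimal example, assuming ding.entry.serial_pipeline keeps its (config, seed) signature:

    from ding.entry import serial_pipeline
    from dizoo.classic_control.cartpole.config.cartpole_dqn_config import main_config, create_config

    if __name__ == '__main__':
        # train DQN on CartPole with the bundled config; fixed seed for reproducibility
        serial_pipeline((main_config, create_config), seed=0)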

Feature

Algorithm Versatility

• discrete: discrete action space; for the standard single-agent DRL algorithms (Nos. 1-23) this is the only label shown
• continuous: continuous action space (likewise the only label for Nos. 1-23)
• hybrid: hybrid (discrete + continuous) action space (Nos. 1-23)
• dist: distributed reinforcement learning
• MARL: multi-agent reinforcement learning
• exp: exploration mechanisms in reinforcement learning
• IL: imitation learning
• offline: offline reinforcement learning
• mbrl: model-based reinforcement learning
• other: other sub-direction algorithms, usually used as plug-ins in the whole pipeline

P.S.: The .py files in the Runnable Demo column can be found in dizoo.

| No. | Algorithm | Label | Doc and Implementation | Runnable Demo |
| --- | --------- | ----- | ---------------------- | ------------- |
| 1 | DQN | discrete | DQN doc, DQN doc (Chinese), policy/dqn | python3 -u cartpole_dqn_main.py / ding -m serial -c cartpole_dqn_config.py -s 0 |
| 2 | C51 | discrete | C51 doc, policy/c51 | ding -m serial -c cartpole_c51_config.py -s 0 |
| 3 | QRDQN | discrete | QRDQN doc, policy/qrdqn | ding -m serial -c cartpole_qrdqn_config.py -s 0 |
| 4 | IQN | discrete | IQN doc, policy/iqn | ding -m serial -c cartpole_iqn_config.py -s 0 |
| 5 | FQF | discrete | FQF doc, policy/fqf | ding -m serial -c cartpole_fqf_config.py -s 0 |
| 6 | Rainbow | discrete | Rainbow doc, policy/rainbow | ding -m serial -c cartpole_rainbow_config.py -s 0 |
| 7 | SQL | discrete, continuous | SQL doc, policy/sql | ding -m serial -c cartpole_sql_config.py -s 0 |
| 8 | R2D2 | dist, discrete | R2D2 doc, policy/r2d2 | ding -m serial -c cartpole_r2d2_config.py -s 0 |
| 9 | PG | discrete | PG doc, policy/pg | ding -m serial -c cartpole_pg_config.py -s 0 |
| 10 | A2C | discrete | A2C doc, policy/a2c | ding -m serial -c cartpole_a2c_config.py -s 0 |
| 11 | PPO/MAPPO | discrete, continuous, MARL | PPO doc, policy/ppo | python3 -u cartpole_ppo_main.py / ding -m serial_onpolicy -c cartpole_ppo_config.py -s 0 |
| 12 | PPG | discrete | PPG doc, policy/ppg | python3 -u cartpole_ppg_main.py |
| 13 | ACER | discrete, continuous | ACER doc, policy/acer | ding -m serial -c cartpole_acer_config.py -s 0 |
| 14 | IMPALA | dist, discrete | IMPALA doc, policy/impala | ding -m serial -c cartpole_impala_config.py -s 0 |
| 15 | DDPG/PADDPG | continuous, hybrid | DDPG doc, policy/ddpg | ding -m serial -c pendulum_ddpg_config.py -s 0 |
| 16 | TD3 | continuous, hybrid | TD3 doc, policy/td3 | python3 -u pendulum_td3_main.py / ding -m serial -c pendulum_td3_config.py -s 0 |
| 17 | D4PG | continuous | D4PG doc, policy/d4pg | python3 -u pendulum_d4pg_config.py |
| 18 | SAC/[MASAC] | discrete, continuous, MARL | SAC doc, policy/sac | ding -m serial -c pendulum_sac_config.py -s 0 |
| 19 | PDQN | hybrid | policy/pdqn | ding -m serial -c gym_hybrid_pdqn_config.py -s 0 |
| 20 | MPDQN | hybrid | policy/pdqn | ding -m serial -c gym_hybrid_mpdqn_config.py -s 0 |
| 21 | HPPO | hybrid | policy/ppo | ding -m serial_onpolicy -c gym_hybrid_hppo_config.py -s 0 |
| 22 | BDQ | hybrid | policy/bdq | python3 -u hopper_bdq_config.py |
| 23 | MDQN | discrete | policy/mdqn | python3 -u asterix_mdqn_config.py |
| 24 | QMIX | MARL | QMIX doc, policy/qmix | ding -m serial -c smac_3s5z_qmix_config.py -s 0 |
| 25 | COMA | MARL | COMA doc, policy/coma | ding -m serial -c smac_3s5z_coma_config.py -s 0 |
| 26 | QTran | MARL | policy/qtran | ding -m serial -c smac_3s5z_qtran_config.py -s 0 |
| 27 | WQMIX | MARL | WQMIX doc, policy/wqmix | ding -m serial -c smac_3s5z_wqmix_config.py -s 0 |
| 28 | CollaQ | MARL | CollaQ doc, policy/collaq | ding -m serial -c smac_3s5z_collaq_config.py -s 0 |
| 29 | MADDPG | MARL | MADDPG doc, policy/ddpg | ding -m serial -c ant_maddpg_config.py -s 0 |
| 30 | GAIL | IL | GAIL doc, reward_model/gail | ding -m serial_gail -c cartpole_dqn_gail_config.py -s 0 |
| 31 | SQIL | IL | SQIL doc, entry/sqil | ding -m serial_sqil -c cartpole_sqil_config.py -s 0 |
| 32 | DQFD | IL | DQFD doc, policy/dqfd | ding -m serial_dqfd -c cartpole_dqfd_config.py -s 0 |
| 33 | R2D3 | IL | R2D3 doc, R2D3 doc (Chinese), policy/r2d3 | python3 -u pong_r2d3_r2d2expert_config.py |
| 34 | Guided Cost Learning | IL | Guided Cost Learning doc (Chinese), reward_model/guided_cost | python3 lunarlander_gcl_config.py |
| 35 | TREX | IL | TREX doc, reward_model/trex | python3 mujoco_trex_main.py |
| 36 | Implicit Behavioral Cloning (DFO+MCMC) | IL | policy/ibc, model/template/ebm | python3 d4rl_ibc_main.py -s 0 -c pen_human_ibc_mcmc_config.py |
| 37 | BCO | IL | entry/bco | python3 -u cartpole_bco_config.py |
| 38 | HER | exp | HER doc, reward_model/her | python3 -u bitflip_her_dqn.py |
| 39 | RND | exp | RND doc, reward_model/rnd | python3 -u cartpole_rnd_onppo_config.py |
| 40 | ICM | exp | ICM doc, ICM doc (Chinese), reward_model/icm | python3 -u cartpole_ppo_icm_config.py |
| 41 | CQL | offline | CQL doc, policy/cql | python3 -u d4rl_cql_main.py |
| 42 | TD3BC | offline | TD3BC doc, policy/td3_bc | python3 -u d4rl_td3_bc_main.py |
| 43 | Decision Transformer | offline | policy/dt | python3 -u ding/example/dt.py |
| 44 | EDAC | offline | EDAC doc, policy/edac | python3 -u d4rl_edac_main.py |
| 45 | MBSAC (SAC+MVE+SVG) | continuous, mbrl | policy/mbpolicy/mbsac | python3 -u pendulum_mbsac_mbpo_config.py / python3 -u pendulum_mbsac_ddppo_config.py |
| 46 | STEVESAC (SAC+STEVE+SVG) | continuous, mbrl | policy/mbpolicy/mbsac | python3 -u pendulum_stevesac_mbpo_config.py |
| 47 | MBPO | mbrl | MBPO doc, world_model/mbpo | python3 -u pendulum_sac_mbpo_config.py |
| 48 | DDPPO | mbrl | world_model/ddppo | python3 -u pendulum_mbsac_ddppo_config.py |
| 49 | DreamerV3 | mbrl | world_model/dreamerv3 | python3 -u cartpole_balance_dreamer_config.py |
| 50 | PER | other | worker/replay_buffer | rainbow demo |
| 51 | GAE | other | rl_utils/gae | ppo demo |
| 52 | ST-DIM | other | torch_utils/loss/contrastive_loss | ding -m serial -c cartpole_dqn_stdim_config.py -s 0 |
| 53 | PLR | other | PLR doc, data/level_replay/level_sampler | python3 -u bigfish_plr_config.py -s 0 |
| 54 | PCGrad | other | torch_utils/optimizer_helper/PCGrad | python3 -u multi_mnist_pcgrad_main.py -s 0 |

Environment Versatility

| No. | Environment | Label | Visualization | Code and Doc Links |
| --- | ----------- | ----- | ------------- | ------------------ |
| 1 | Atari | discrete | original | dizoo link, env tutorial, env guide (Chinese) |
| 2 | box2d/bipedalwalker | continuous | original | dizoo link, env tutorial, env guide (Chinese) |
| 3 | box2d/lunarlander | discrete | original | dizoo link, env tutorial, env guide (Chinese) |
| 4 | classic_control/cartpole | discrete | original | dizoo link, env tutorial, env guide (Chinese) |
| 5 | classic_control/pendulum | continuous | original | dizoo link, env tutorial, env guide (Chinese) |
| 6 | competitive_rl | discrete, selfplay | original | dizoo link, env guide (Chinese) |
| 7 | gfootball | discrete, sparse, selfplay | original | dizoo link, env tutorial, env guide (Chinese) |
| 8 | minigrid | discrete, sparse | original | dizoo link, env tutorial, env guide (Chinese) |
| 9 | MuJoCo | continuous | original | dizoo link, env tutorial, env guide (Chinese) |
| 10 | PettingZoo | discrete, continuous, marl | original | dizoo link, env tutorial, env guide (Chinese) |
| 11 | overcooked | discrete, marl | original | dizoo link, env tutorial |
| 12 | procgen | discrete | original | dizoo link, env tutorial, env guide (Chinese) |
| 13 | pybullet | continuous | original | dizoo link, env guide (Chinese) |
| 14 | smac | discrete, marl, selfplay, sparse | original | dizoo link, env tutorial, env guide (Chinese) |
| 15 | d4rl | offline | original | dizoo link, env guide (Chinese) |
| 16 | league_demo | discrete, selfplay | original | dizoo link |
| 17 | pomdp atari | discrete | — | dizoo link |
| 18 | bsuite | discrete | original | dizoo link, env tutorial, env guide (Chinese) |
| 19 | ImageNet | IL | original | dizoo link, env guide (Chinese) |
| 20 | slime_volleyball | discrete, selfplay | original | dizoo link, env tutorial, env guide (Chinese) |
| 21 | gym_hybrid | hybrid | original | dizoo link, env tutorial, env guide (Chinese) |
| 22 | GoBigger | hybrid, marl, selfplay | original | dizoo link, env tutorial, env guide (Chinese) |
| 23 | gym_soccer | hybrid | original | dizoo link, env guide (Chinese) |
| 24 | multiagent_mujoco | continuous, marl | original | dizoo link, env guide (Chinese) |
| 25 | bitflip | discrete, sparse | original | dizoo link, env guide (Chinese) |
| 26 | sokoban | discrete | Game 2 | dizoo link, env tutorial, env guide (Chinese) |
| 27 | gym_anytrading | discrete | original | dizoo link, env tutorial |
| 28 | mario | discrete | original | dizoo link, env tutorial, env guide (Chinese) |
| 29 | dmc2gym | continuous | original | dizoo link, env tutorial, env guide (Chinese) |
| 30 | evogym | continuous | original | dizoo link, env tutorial, env guide (Chinese) |
| 31 | gym-pybullet-drones | continuous | original | dizoo link, env guide (Chinese) |
| 32 | beergame | discrete | original | dizoo link, env guide (Chinese) |
| 33 | classic_control/acrobot | discrete | original | dizoo link, env guide (Chinese) |
| 34 | box2d/car_racing | discrete, continuous | original | dizoo link, env guide (Chinese) |
| 35 | metadrive | continuous | original | dizoo link, env guide (Chinese) |
| 36 | cliffwalking | discrete | original | dizoo link, env guide (Chinese) |

• discrete: discrete action space
• continuous: continuous action space
• hybrid: hybrid (discrete + continuous) action space
• marl: multi-agent RL environment
• sparse: sparse-reward environment, typically requiring exploration
• offline: offline RL environment
• IL: imitation learning or supervised learning dataset
• selfplay: environment that allows agent-vs-agent battle

P.S.: Some environments in Atari, such as MontezumaRevenge, are also of the sparse-reward type.
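
All of these environments are exposed through DI-engine's unified env interface. Below is a minimal interaction sketch, assuming the gym-style DingEnvWrapper and its random_action helper behave as in recent releases:

    import gym
    from ding.envs import DingEnvWrapper

    # wrap any gym environment into the DI-engine BaseEnv interface
    env = DingEnvWrapper(gym.make('CartPole-v0'))
    obs = env.reset()
    # step() returns a BaseEnvTimestep namedtuple: (obs, reward, done, info)
    timestep = env.step(env.random_action())
    print(timestep.reward, timestep.done)
    env.close()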

General Data Container: TreeTensor

DI-engine utilizes TreeTensor as the basic data container in various components; it is easy to use and keeps the code consistent across modules such as environment definition, data processing and DRL optimization. Here are some concrete code examples:

  • TreeTensor can easily extend all the operations of torch.Tensor to nested data:

    import treetensor.torch as ttorch
    
    
    # create random tensor
    data = ttorch.randn({'a': (3, 2), 'b': {'c': (3, )}})
    # clone+detach tensor
    data_clone = data.clone().detach()
    # access tree structure like attribute
    a = data.a
    c = data.b.c
    # stack/cat/split
    stacked_data = ttorch.stack([data, data_clone], 0)
    cat_data = ttorch.cat([data, data_clone], 0)
    data, data_clone = ttorch.split(stacked_data, 1)
    # reshape
    data = data.unsqueeze(-1)
    data = data.squeeze(-1)
    flatten_data = data.view(-1)
    # indexing
    data_0 = data[0]
    data_1to2 = data[1:2]
    # execute math calculations
    data = data.sin()
    data.b.c.cos_().clamp_(-1, 1)
    data += data ** 2
    # backward
    data.requires_grad_(True)
    loss = data.arctan().mean()
    loss.backward()
    # print shape
    print(data.shape)
    # result
    # <Size 0x7fbd3346ddc0>
    # ├── 'a' --> torch.Size([1, 3, 2])
    # └── 'b' --> <Size 0x7fbd3346dd00>
    #     └── 'c' --> torch.Size([1, 3])
  • TreeTensor makes it simple yet efficient to implement a classic deep reinforcement learning data pipeline. The diff below contrasts a hand-written nested stack ('-' lines) with the TreeTensor equivalent ('+' lines):

    import torch
    import treetensor.torch as ttorch
    
    B = 4
    
    
    def get_item():
        return {
            'obs': {
                'scalar': torch.randn(12),
                'image': torch.randn(3, 32, 32),
            },
            'action': torch.randint(0, 10, size=(1,)),
            'reward': torch.rand(1),
            'done': False,
        }
    
    
    data = [get_item() for _ in range(B)]
    
    
    # execute `stack` op
    - def stack(data, dim):
    -     elem = data[0]
    -     if isinstance(elem, torch.Tensor):
    -         return torch.stack(data, dim)
    -     elif isinstance(elem, dict):
    -         return {k: stack([item[k] for item in data], dim) for k in elem.keys()}
    -     elif isinstance(elem, bool):
    -         return torch.BoolTensor(data)
    -     else:
    -         raise TypeError("not support elem type: {}".format(type(elem)))
    - stacked_data = stack(data, dim=0)
    + data = [ttorch.tensor(d) for d in data]
    + stacked_data = ttorch.stack(data, dim=0)
    
    # validate
    - assert stacked_data['obs']['image'].shape == (B, 3, 32, 32)
    - assert stacked_data['action'].shape == (B, 1)
    - assert stacked_data['reward'].shape == (B, 1)
    - assert stacked_data['done'].shape == (B,)
    - assert stacked_data['done'].dtype == torch.bool
    + assert stacked_data.obs.image.shape == (B, 3, 32, 32)
    + assert stacked_data.action.shape == (B, 1)
    + assert stacked_data.reward.shape == (B, 1)
    + assert stacked_data.done.shape == (B,)
    + assert stacked_data.done.dtype == torch.bool

Feedback and Contribution

We appreciate all feedback and contributions that improve DI-engine, covering both algorithms and system design. CONTRIBUTING.md offers the necessary information.

Supporters

↳ Stargazers

Stargazers repo roster for @opendilab/DI-engine

↳ Forkers

Forkers repo roster for @opendilab/DI-engine

Citation

@misc{ding,
    title={{DI-engine: OpenDILab} Decision Intelligence Engine},
    author={DI-engine Contributors},
    publisher={GitHub},
    howpublished = {\url{https://github.com/opendilab/DI-engine}},
    year={2021},
}

License

DI-engine is released under the Apache 2.0 license.

More Repositories

1. awesome-RLHF — A curated list of reinforcement learning with human feedback resources (continually updated). 3,262 stars
2. PPOxFamily (Python) — PPO x Family DRL tutorial course: an introductory open course on decision intelligence, with 8 lessons covering algorithm theory, code logic, and hands-on decision-AI practice. 1,875 stars
3. DI-star (Python) — An artificial intelligence platform for StarCraft II with large-scale distributed training and grandmaster-level agents. 1,215 stars
4. LightZero (Python) — [NeurIPS 2023 Spotlight] LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios (awesome MCTS). 1,097 stars
5. awesome-model-based-RL — A curated list of awesome model-based RL resources (continually updated). 851 stars
6. awesome-diffusion-model-in-rl — A curated list of Diffusion Model in RL resources (continually updated). 739 stars
7. awesome-decision-transformer — A curated list of Decision Transformer resources (continually updated). 671 stars
8. LMDrive (Jupyter Notebook) — [CVPR 2024] LMDrive: Closed-Loop End-to-End Driving with Large Language Models. 592 stars
9. DI-drive (Python) — Decision intelligence platform for autonomous driving simulation. 563 stars
10. InterFuser (Python) — [CoRL 2022] InterFuser: Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer. 522 stars
11. LLMRiddles (Python) — Open-source reproduction/demo of the LLM Riddles game. 515 stars
12. GoBigger (Python) — [ICLR 2023] Come and try the decision-intelligence version of "Agar"! GoBigger can also help with multi-agent decision intelligence study. 459 stars
13. DI-sheep (Python) — Deep reinforcement learning + the "3 Tiles" game (羊了个羊). 416 stars
14. awesome-end-to-end-autonomous-driving — A curated list of awesome end-to-end autonomous driving resources (continually updated). 371 stars
15. awesome-multi-modal-reinforcement-learning — A curated list of multi-modal reinforcement learning resources (continually updated). 367 stars
16. awesome-exploration-rl — A curated list of awesome exploration RL resources (continually updated). 365 stars
17. SO2 (Python) — [AAAI 2024] A Perspective of Q-value Estimation on Offline-to-Online Reinforcement Learning. 285 stars
18. DI-engine-docs (Python) — DI-engine docs (Chinese and English). 281 stars
19. DI-orchestrator (Go) — OpenDILab RL Kubernetes custom resource and operator lib. 240 stars
20. DI-smartcross (Python) — Decision intelligence platform for traffic-crossing signal control. 230 stars
21. treevalue (Python) — The most awesome tree-structure computing solutions, to make your life easier (currently the best-performing tree-structure computing solution). 228 stars
22. DI-hpc (Python) — OpenDILab RL HPC op lib, including CUDA and Triton kernels. 222 stars
23. awesome-AI-based-protein-design — A collection of research papers for AI-based protein design. 216 stars
24. ACE (Python) — [AAAI 2023] Official PyTorch implementation of "ACE: Cooperative Multi-agent Q-learning with Bidirectional Action-Dependency". 212 stars
25. DI-treetensor (Python) — Let DI-treetensor help you simplify tree-structure processing (tangled logic in tree-shaped computation? DI-treetensor sorts it out quickly). 202 stars
26. GoBigger-Challenge-2021 (Python) — Interested in multi-agents? The 1st Go-Bigger Multi-Agent Decision Intelligence Challenge is coming, and a big bonus is waiting for you! 195 stars
27. Gobigger-Explore (Python) — Still struggling with the high threshold, or looking for an appropriate baseline? Come here; new starters can also play with their own multi-agents! 185 stars
28. DI-store (Go) — OpenDILab RL object store. 177 stars
29. LightTuner (Python) — 173 stars
30. DOS (Python) — [CVPR 2023] ReasonNet: End-to-End Driving with Temporal and Global Reasoning. 145 stars
31. DI-toolkit (Python) — A simple toolkit package for OpenDILab. 113 stars
32. DI-bioseq (Python) — Decision intelligence platform for biological sequence searching. 111 stars
33. DI-1024 (Python) — Deep reinforcement learning + the 1024/2048 game. 109 stars
34. SmartRefine (Python) — [CVPR 2024] SmartRefine: A Scenario-Adaptive Refinement Framework for Efficient Motion Prediction. 107 stars
35. DIgging (Python) — Decision intelligence for digging out the best parameters in a target environment. 90 stars
36. awesome-driving-behavior-prediction — A collection of research papers for driving behavior prediction. 77 stars
37. PsyDI (TypeScript) — PsyDI: Towards a Personalized and Progressively In-depth Chatbot for Psychological Measurements (e.g. an MBTI measurement agent). 70 stars
38. DI-adventure (Python) — Decision intelligence adventure for beginners. 68 stars
39. GenerativeRL (Python) — Python library for solving reinforcement learning (RL) problems using generative models (e.g. diffusion models). 48 stars
40. huggingface_ding (Python) — Auxiliary code for pulling and loading DI-engine-based reinforcement learning models from the Hugging Face Hub, or pushing them to the Hub with an auto-created model card. 46 stars
41. CodeMorpheus (Python) — CodeMorpheus: generate code self-portraits with one click (decision AI + generative AI). 45 stars
42. OpenPaL — Building an open-ended embodied agent in a battle-royale FPS game. 33 stars
43. awesome-ui-agents — A curated list of awesome UI agent resources, encompassing Web, App, OS, and beyond (continually updated). 31 stars
44. .github — The first decision intelligence platform covering the most complete algorithms in academia and industry. 19 stars
45. CleanS2S — High-quality, streaming speech-to-speech interactive agent in a single file (a full-duplex streaming voice-interaction prototype agent implemented in a single file). 1 star