• Stars: 1,097
  • Rank: 42,257 (Top 0.9 %)
  • Language: Python
  • License: Apache License 2.0
  • Created: about 2 years ago
  • Updated: about 1 month ago


Repository Details

[NeurIPS 2023 Spotlight] LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios (awesome MCTS)

LightZero



LightZero is a lightweight, efficient, and easy-to-understand open-source algorithm toolkit that combines Monte Carlo Tree Search (MCTS) and Deep Reinforcement Learning (RL).

English | 简体中文

Background

The line of work combining Monte Carlo Tree Search and Deep Reinforcement Learning, represented by AlphaZero and MuZero, has achieved superhuman performance in games such as Go and Atari, and has also made encouraging progress in scientific problems such as protein structure prediction and matrix multiplication algorithm search. The following is an overview of the historical evolution of the Monte Carlo Tree Search algorithm family:

(pipeline diagram)

Overview

LightZero is an open-source algorithm toolkit that combines MCTS and RL for PyTorch. It provides support for a range of MCTS-based RL algorithms and applications with the following advantages:

  • Lightweight.
  • Efficient.
  • Easy-to-understand.

For further details, please refer to Features, Framework Structure and Integrated Algorithms.

LightZero aims to promote the standardization of the MCTS+RL algorithm family to accelerate related research and applications. A performance comparison of all implemented algorithms under a unified framework is presented in the Benchmark.


Features

Lightweight: LightZero integrates multiple MCTS algorithm families and can solve decision-making problems with various attributes within a lightweight framework. The algorithms and environments implemented in LightZero can be found here.

Efficient: LightZero uses mixed heterogeneous computing (Python combined with C++/Cython) to improve computational efficiency in the most time-consuming parts of MCTS algorithms.

Easy-to-understand: LightZero provides detailed documentation and algorithm framework diagrams for all integrated algorithms to help users understand the core of each algorithm and compare the differences and similarities between algorithms under the same paradigm. LightZero also provides function call graphs and network structure diagrams for the algorithm implementations, making it easier for users to locate critical code. All the documentation can be found here.

Framework Structure

(Framework pipeline diagram of LightZero)

The figure above shows the framework pipeline of LightZero. We briefly introduce its three core modules below:

Model: Model is used to define the network architecture, including the __init__ function for building the network structure and the forward function for computing the network's forward propagation.
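
For intuition, here is a minimal, hypothetical sketch of what such a model might look like in PyTorch; the class name, layer sizes, and two-head layout are illustrative assumptions, not LightZero's actual network definitions.

import torch
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self, obs_dim: int, action_dim: int, hidden_dim: int = 128):
        # __init__ builds the network structure.
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        self.policy_head = nn.Linear(hidden_dim, action_dim)  # policy logits
        self.value_head = nn.Linear(hidden_dim, 1)            # scalar value

    def forward(self, obs: torch.Tensor):
        # forward computes the network's forward propagation.
        h = self.encoder(obs)
        return self.policy_head(h), self.value_head(h)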

Policy: Policy defines the way the network is updated and interacts with the environment, including three processes: the learning process, the collecting process, and the evaluation process.
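
As a rough illustration only (the class and method names below are hypothetical and assume a model that returns policy logits and a value, as in the sketch above), the three processes could look like this:

import torch
import torch.nn.functional as F
from torch.distributions import Categorical

class SimplePolicy:
    def __init__(self, model: torch.nn.Module, lr: float = 1e-3):
        self.model = model
        self.optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    def learn(self, obs, target_value):
        # learning process: update the network from collected data.
        _, value = self.model(obs)
        loss = F.mse_loss(value.squeeze(-1), target_value)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        return loss.item()

    def collect_action(self, obs):
        # collecting process: sample actions for exploration while interacting with the environment.
        logits, _ = self.model(obs)
        return Categorical(logits=logits).sample()

    def eval_action(self, obs):
        # evaluation process: act greedily to measure performance.
        logits, _ = self.model(obs)
        return logits.argmax(dim=-1)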

MCTS: MCTS defines the structure of the Monte Carlo search tree and the way it interacts with the Policy. MCTS is implemented in two languages, Python and C++, in ptree and ctree respectively.
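
As a toy illustration of the kind of node structure such a tree maintains (the field names and the PUCT constant below are illustrative assumptions, not the actual ptree/ctree code):

import math

class Node:
    def __init__(self, prior: float):
        self.prior = prior        # prior probability from the policy network
        self.visit_count = 0
        self.value_sum = 0.0
        self.children = {}        # maps action -> child Node

    def value(self) -> float:
        # mean value of the node over all visits
        return 0.0 if self.visit_count == 0 else self.value_sum / self.visit_count

    def ucb_score(self, parent_visit_count: int, c_puct: float = 1.25) -> float:
        # PUCT selection rule used by AlphaZero/MuZero-style search
        exploration = c_puct * self.prior * math.sqrt(parent_visit_count) / (1 + self.visit_count)
        return self.value() + exploration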

For the file structure of LightZero, please refer to lightzero_file_structure.

Integrated Algorithms

LightZero is a library with a PyTorch implementation of MCTS algorithms (sometimes combined with Cython and C++), including AlphaZero, MuZero, EfficientZero, Sampled EfficientZero, and Gumbel MuZero.

The environments and algorithms currently supported by LightZero are shown in the table below:

Env./Alg.     | AlphaZero | MuZero | EfficientZero | Sampled EfficientZero | Gumbel MuZero
Atari         | ---       | ✔      | ✔             | ✔                     | ✔
TicTacToe     | ✔         | ✔      | 🔒            | 🔒                    | ✔
Gomoku        | ✔         | ✔      | 🔒            | 🔒                    | ✔
Go            | 🔒        | 🔒     | 🔒            | 🔒                    | 🔒
LunarLander   | ---       | ✔      | ✔             | ✔                     | ✔
BipedalWalker | ---       | ✔      | ✔             | ✔                     | 🔒
CartPole      | ---       | ✔      | ✔             | ✔                     | ✔
Pendulum      | ---       | ✔      | ✔             | ✔                     | ✔
MuJoCo        | ---       | 🔒     | 🔒            | ✔                     | 🔒

(1): "" means that the corresponding item is finished and well-tested.

(2): "🔒" means that the corresponding item is in the waitinglist (Work In Progress).

(3): "---" means that this algorithm doesn't support this environment.

Installation

You can install the latest development version of LightZero from the GitHub source code with the following commands:

git clone https://github.com/opendilab/LightZero.git
cd LightZero
pip3 install -e .
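
To verify the installation, a quick check is to import the package (assuming the importable module is named lzero, as in the repository's source tree):

python3 -c "import lzero; print(lzero.__file__)"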

Quick Start

Train a MuZero agent to play CartPole:

cd LightZero
python3 -u zoo/classic_control/cartpole/config/cartpole_muzero_config.py

Train a MuZero agent to play Pong:

cd LightZero
python3 -u zoo/atari/config/atari_muzero_config.py

Train a MuZero agent to play TicTacToe:

cd LightZero
python3 -u zoo/board_games/tictactoe/config/tictactoe_muzero_bot_mode_config.py

Benchmark

(Benchmark curves: TicTacToe and Gomoku in bot mode; Pong, Qbert, and Ms. Pac-Man, including a Sampled EfficientZero comparison over different K on Ms. Pac-Man.)

"Factored Policy" indicates that the agent learns a policy network that outputs a categorical distribution. After manual discretization, the dimensions of the action space for the five environments are 11, 49 (7^2), 256 (4^4), 64 (4^3), and 4096 (4^6), respectively. On the other hand, "Gaussian Policy" refers to the agent learning a policy network that directly outputs parameters (mu and sigma) for a Gaussian distribution.

(Benchmark curves: Pendulum, including a Sampled EfficientZero comparison over different K; LunarLander; BipedalWalker, Hopper, and Walker2d; and Gumbel MuZero results under different numbers of simulations on Pong, Ms. Pac-Man, Gomoku in bot mode, and LunarLander.)
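
For illustration only, here is a minimal sketch of the two policy-head variants compared above; the module names, layer sizes, and bin counts are assumptions, not LightZero's actual implementation:

import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal

class FactoredPolicyHead(nn.Module):
    # "Factored Policy": a categorical distribution over a manually discretized
    # action space (e.g. 11 bins for Pendulum's 1-D action).
    def __init__(self, hidden_dim: int, num_bins: int = 11):
        super().__init__()
        self.logits = nn.Linear(hidden_dim, num_bins)

    def forward(self, h: torch.Tensor) -> Categorical:
        return Categorical(logits=self.logits(h))

class GaussianPolicyHead(nn.Module):
    # "Gaussian Policy": directly outputs the parameters (mu and sigma) of a
    # Gaussian distribution over the continuous action.
    def __init__(self, hidden_dim: int, action_dim: int = 1):
        super().__init__()
        self.mu = nn.Linear(hidden_dim, action_dim)
        self.log_sigma = nn.Linear(hidden_dim, action_dim)

    def forward(self, h: torch.Tensor) -> Normal:
        return Normal(self.mu(h), self.log_sigma(h).exp())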

Awesome-MCTS Notes

Paper Notes

The following are the detailed paper notes (in Chinese) for the above algorithms:


Algo. Overview

The following are the overview MCTS principle diagrams of the above algorithms:


Awesome-MCTS Papers

Here is a collection of research papers about Monte Carlo Tree Search. This section will be continuously updated to track the frontier of MCTS.

Key Papers


LightZero Implemented series

AlphaGo series

MuZero series

MCTS Analysis

MCTS Application

Other Papers


ICML

ICLR

NeurIPS

Other Conference or Journal

Feedback and Contribution

  • File an issue on GitHub

  • Contact our email ([email protected])

  • We appreciate all feedback and contributions that help improve LightZero, in both algorithms and system design.

Citation

@misc{lightzero,
    title={{LightZero: OpenDILab} A lightweight and efficient toolkit designed for the MCTS, AlphaZero, and MuZero family of algorithms.},
    author={LightZero Contributors},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/opendilab/LightZero}},
    year={2023},
}

Acknowledgments

This repo is partially based on the following repositories; many thanks for their pioneering work:

Special thanks to @PaParaZz1, @karroyan, @nighood, @jayyoung0802, @timothijoe, @TuTuHuss, @puyuan1996, @HansBug for their contributions and support.

License

All code within this repository is under Apache License 2.0.

More Repositories

1. awesome-RLHF (3,262 stars) — A curated list of reinforcement learning with human feedback resources (continually updated)
2. DI-engine (Python, 3,041 stars) — OpenDILab Decision AI Engine. The Most Comprehensive Reinforcement Learning Framework B.P.
3. PPOxFamily (Python, 1,875 stars) — PPO x Family DRL Tutorial Course (an introductory open course on decision intelligence: 8 lessons to help you sort out the algorithm theory, straighten out the code logic, and put decision AI into practice)
4. DI-star (Python, 1,215 stars) — An artificial intelligence platform for StarCraft II with large-scale distributed training and grand-master agents.
5. awesome-model-based-RL (851 stars) — A curated list of awesome model-based RL resources (continually updated)
6. awesome-diffusion-model-in-rl (739 stars) — A curated list of Diffusion Model in RL resources (continually updated)
7. awesome-decision-transformer (671 stars) — A curated list of Decision Transformer resources (continually updated)
8. LMDrive (Jupyter Notebook, 592 stars) — [CVPR 2024] LMDrive: Closed-Loop End-to-End Driving with Large Language Models
9. DI-drive (Python, 563 stars) — Decision Intelligence Platform for Autonomous Driving simulation.
10. InterFuser (Python, 522 stars) — [CoRL 2022] InterFuser: Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer
11. LLMRiddles (Python, 515 stars) — Open-Source Reproduction/Demo of the LLM Riddles Game
12. GoBigger (Python, 459 stars) — [ICLR 2023] Come & try the Decision-Intelligence version of "Agar"! GoBigger can also help you with multi-agent decision intelligence study.
13. DI-sheep (Python, 416 stars) — Deep Reinforcement Learning + 3 Tiles Game ("Sheep a Sheep")
14. awesome-end-to-end-autonomous-driving (371 stars) — A curated list of awesome End-to-End Autonomous Driving resources (continually updated)
15. awesome-multi-modal-reinforcement-learning (367 stars) — A curated list of Multi-Modal Reinforcement Learning resources (continually updated)
16. awesome-exploration-rl (365 stars) — A curated list of awesome exploration RL resources (continually updated)
17. SO2 (Python, 285 stars) — [AAAI 2024] A Perspective of Q-value Estimation on Offline-to-Online Reinforcement Learning
18. DI-engine-docs (Python, 281 stars) — DI-engine docs (Chinese and English)
19. DI-orchestrator (Go, 240 stars) — OpenDILab RL Kubernetes Custom Resource and Operator Lib
20. DI-smartcross (Python, 230 stars) — Decision Intelligence platform for Traffic Crossing Signal Control
21. treevalue (Python, 228 stars) — The most awesome tree structure computing solutions to make your life easier (currently the best-performing tree-structure computation solution)
22. DI-hpc (Python, 222 stars) — OpenDILab RL HPC OP Lib, including CUDA and Triton kernels
23. awesome-AI-based-protein-design (216 stars) — A collection of research papers for AI-based protein design
24. ACE (Python, 212 stars) — [AAAI 2023] Official PyTorch implementation of the paper "ACE: Cooperative Multi-agent Q-learning with Bidirectional Action-Dependency".
25. DI-treetensor (Python, 202 stars) — Let DI-treetensor help you simplify structure processing! (Tree-shaped computation turning into tangled logic? DI-treetensor sorts it out quickly.)
26. GoBigger-Challenge-2021 (Python, 195 stars) — Interested in multi-agents? The 1st Go-Bigger Multi-Agent Decision Intelligence Challenge is coming and a big bonus is waiting for you!
27. Gobigger-Explore (Python, 185 stars) — Still struggling with the high threshold or looking for an appropriate baseline? Come here and new starters can also play with their own multi-agents!
28. DI-store (Go, 177 stars) — OpenDILab RL Object Store
29. LightTuner (Python, 173 stars)
30. DOS (Python, 145 stars) — [CVPR 2023] ReasonNet: End-to-End Driving with Temporal and Global Reasoning
31. DI-toolkit (Python, 113 stars) — A simple toolkit package for opendilab
32. DI-bioseq (Python, 111 stars) — Decision Intelligence platform for Biological Sequence Searching
33. DI-1024 (Python, 109 stars) — 1024 + Deep Reinforcement Learning (1024 Game / 2048 Game)
34. SmartRefine (Python, 107 stars) — [CVPR 2024] SmartRefine: A Scenario-Adaptive Refinement Framework for Efficient Motion Prediction
35. DIgging (Python, 90 stars) — Decision Intelligence for digging the best parameters in a target environment.
36. awesome-driving-behavior-prediction (77 stars) — A collection of research papers for Driving Behavior Prediction
37. PsyDI (TypeScript, 70 stars) — PsyDI: Towards a Personalized and Progressively In-depth Chatbot for Psychological Measurements (e.g. MBTI Measurement Agent)
38. DI-adventure (Python, 68 stars) — Decision Intelligence Adventure for Beginners
39. GenerativeRL (Python, 48 stars) — Python library for solving reinforcement learning (RL) problems using generative models (e.g. Diffusion Models).
40. huggingface_ding (Python, 46 stars) — Auxiliary code for pulling and loading DI-engine-based reinforcement learning models from the Huggingface Hub, or pushing them onto the Huggingface Hub with an auto-created model card.
41. CodeMorpheus (Python, 45 stars) — CodeMorpheus: Generate code self-portraits with one click (decision AI + generative AI)
42. OpenPaL (33 stars) — Building an open-ended embodied agent in a battle royale FPS game
43. awesome-ui-agents (31 stars) — A curated list of awesome UI agent resources, encompassing Web, App, OS, and beyond (continually updated)
44. .github (19 stars) — The first decision intelligence platform covering the most complete algorithms in academia and industry
45. CleanS2S (1 star) — High-quality, streaming Speech-to-Speech interactive agent in a single file (a streaming, full-duplex voice-interaction prototype agent implemented in just one file!)