

EnvPool

C++-based high-performance parallel environment execution engine (vectorized env) for general RL environments.


EnvPool is a C++-based batched environment pool with pybind11 and thread pool. It has high performance (~1M raw FPS with Atari games, ~3M raw FPS with the Mujoco simulator on DGX-A100) and compatible APIs (it supports both gym and dm_env, both sync and async modes, and both single- and multi-player environments), and it currently covers a range of environment families.

Check out our arXiv paper for more details on EnvPool's design and highlights!

Installation

PyPI

EnvPool is currently hosted on PyPI. It requires Python >= 3.7.

You can simply install EnvPool with the following command:

$ pip install envpool

After installation, open a Python console and type

import envpool
print(envpool.__version__)

If no error occurs, you have successfully installed EnvPool.
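
To double-check what the installed wheel provides, you can also list the registered task ids. A minimal sketch, assuming the envpool.list_all_envs() helper is available in your installed version:

import envpool

# List registered task ids (assumes envpool.list_all_envs() exists in this version).
all_envs = envpool.list_all_envs()
print(len(all_envs), "environments registered, e.g.", all_envs[:5])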

From Source

Please refer to the build-from-source guideline in the documentation.

Documentation

The tutorials and API documentation are hosted on envpool.readthedocs.io.

The example scripts are under the examples/ folder; the benchmark scripts are under the benchmark/ folder.

Benchmark Results

We perform our benchmarks with the ALE Atari environment PongNoFrameskip-v4 (with environment wrappers from OpenAI Baselines) and the Mujoco environment Ant-v3 on different hardware setups, including a TPUv3-8 virtual machine (VM) with 96 CPU cores and 2 NUMA nodes, and an NVIDIA DGX-A100 with 256 CPU cores and 8 NUMA nodes. Baselines include 1) a naive Python for-loop; 2) the most popular RL environment parallelization method, Python subprocess-based execution, e.g., gym.vector_env; 3) Sample Factory, to our knowledge the fastest RL environment executor before EnvPool.

We report EnvPool performance in sync mode, async mode, and NUMA + async mode, compared with the baselines at different numbers of workers (i.e., numbers of CPU cores). As the results show, EnvPool achieves significant improvements over the baselines in all settings. On the high-end setup, EnvPool achieves 1 million frames per second on Atari and 3 million frames per second on Mujoco with 256 CPU cores, which is 14.9x / 19.6x the throughput of the gym.vector_env baseline. On a typical PC setup with 12 CPU cores, EnvPool's throughput is 3.1x / 2.9x that of gym.vector_env.

Atari (highest FPS)  | Laptop (12) | Workstation (32) | TPU-VM (96) | DGX-A100 (256)
For-loop             |       4,893 |            7,914 |       3,993 |          4,640
Subprocess           |      15,863 |           47,699 |      46,910 |         71,943
Sample-Factory       |      28,216 |          138,847 |     222,327 |        707,494
EnvPool (sync)       |      37,396 |          133,824 |     170,380 |        427,851
EnvPool (async)      |      49,439 |          200,428 |     359,559 |        891,286
EnvPool (numa+async) |           / |                / |     373,169 |      1,069,922

Mujoco (highest FPS) | Laptop (12) | Workstation (32) | TPU-VM (96) | DGX-A100 (256)
For-loop             |      12,861 |           20,298 |      10,474 |         11,569
Subprocess           |      36,586 |          105,432 |      87,403 |        163,656
Sample-Factory       |      62,510 |          309,264 |     461,515 |      1,573,262
EnvPool (sync)       |      66,622 |          380,950 |     296,681 |        949,787
EnvPool (async)      |     105,126 |          582,446 |     887,540 |      2,363,864
EnvPool (numa+async) |           / |                / |     896,830 |      3,134,287

Please refer to the benchmark page for more details.
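
For a rough, do-it-yourself sense of throughput on your own machine (not the official benchmark; see the benchmark/ folder for that), here is a minimal sketch that counts synchronous env.step transitions per wall-clock second, using the API described in the next section:

import time

import numpy as np

import envpool

# Crude throughput probe: fixed actions, synchronous stepping, wall-clock timing.
num_envs = 32
env = envpool.make_gym("Pong-v5", num_envs=num_envs)
env.reset()
act = np.zeros(num_envs, dtype=int)
steps, start = 0, time.time()
for _ in range(1000):
    env.step(act)
    steps += num_envs
# Note: with Atari frame skipping, one step covers several raw frames,
# so this count is not directly comparable to the paper's raw FPS numbers.
print("env steps per second:", steps / (time.time() - start))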

API Usage

The following shows both the synchronous and asynchronous API usage of EnvPool. You can also run the full script at examples/env_step.py.

Synchronous API

import envpool
import numpy as np

# make gym env
env = envpool.make("Pong-v5", env_type="gym", num_envs=100)
# or use envpool.make_gym(...)
obs = env.reset()  # obs.shape should be (100, 4, 84, 84)
act = np.zeros(100, dtype=int)  # one action per env
obs, rew, term, trunc, info = env.step(act)  # batched step across all 100 envs

Under the synchronous mode, envpool closely resembles openai-gym/dm-env. It has reset and step functions with the same meaning. However, there is one exception in envpool: batch interaction is the default. Therefore, when creating the envpool, there is a num_envs argument that denotes how many envs you would like to run in parallel.

env = envpool.make("Pong-v5", env_type="gym", num_envs=100)

The first dimension of action passed to the step function should equal num_envs.

act = np.zeros(100, dtype=int)

You don't need to manually reset an environment when its done flag is true; all envs in envpool have auto-reset enabled by default.
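
For example, a minimal synchronous rollout loop under these defaults (random actions; an illustrative sketch that relies only on the calls shown above):

import numpy as np

import envpool

env = envpool.make_gym("Pong-v5", num_envs=100)
obs = env.reset()
for _ in range(1000):
    act = np.random.randint(env.action_space.n, size=100)
    obs, rew, term, trunc, info = env.step(act)
    # Finished episodes are reset automatically; term/trunc only mark episode ends.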

Asynchronous API

import envpool
import numpy as np

# make asynchronous
num_envs = 64
batch_size = 16
env = envpool.make("Pong-v5", env_type="gym", num_envs=num_envs, batch_size=batch_size)
action_num = env.action_space.n
env.async_reset()  # send the initial reset signal to all envs
while True:
    obs, rew, term, trunc, info = env.recv()  # whichever batch_size envs finish first
    env_id = info["env_id"]  # ids of the envs in this received batch
    action = np.random.randint(action_num, size=batch_size)
    env.send(action, env_id)  # dispatch actions to exactly those envs

In the asynchronous mode, the step function is split into two parts: the send and recv functions. send takes two arguments: a batch of actions, and the corresponding env_id that each action should be sent to. Unlike step, send does not wait for the envs to execute and return the next state; it returns immediately after the actions are fed to the envs (which is why this is called async mode).

env.send(action, env_id)

To get the "next states", we need to call the recv function. However, recv does not guarantee that you will get back the "next states" of the envs you just called send on. Instead, whatever envs finishes execution gets recved first.

state = env.recv()

Besides num_envs, there is one more argument: batch_size. While num_envs defines how many envs in total are managed by the envpool, batch_size specifies the number of envs involved in each interaction with the envpool. For example, with 64 envs executing in the envpool, each send and recv call interacts with a batch of 16 envs.

envpool.make("Pong-v5", env_type="gym", num_envs=64, batch_size=16)

There are other configurable arguments with envpool.make; please check out EnvPool Python interface introduction.
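
As a hedged illustration (the authoritative list is in the interface introduction; seed and max_episode_steps are assumed here to be accepted keyword arguments for Atari tasks):

env = envpool.make(
    "Pong-v5",
    env_type="gym",
    num_envs=8,
    seed=42,                  # assumed: base random seed for the pool
    max_episode_steps=27000,  # assumed: episode length cap in env steps
)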

Contributing

EnvPool is still under development. More environments will be added, and we always welcome contributions to help make EnvPool better. If you would like to contribute, please check out our contribution guideline.

License

EnvPool is released under the Apache-2.0 license.

Other third-party source code and data are under their corresponding licenses; we do not include their source code or data in this repo.

Citing EnvPool

If you find EnvPool useful, please cite it in your publications.

@inproceedings{weng2022envpool,
 author = {Weng, Jiayi and Lin, Min and Huang, Shengyi and Liu, Bo and Makoviichuk, Denys and Makoviychuk, Viktor and Liu, Zichen and Song, Yufan and Luo, Ting and Jiang, Yukun and Xu, Zhongwen and Yan, Shuicheng},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {S. Koyejo and S. Mohamed and A. Agarwal and D. Belgrave and K. Cho and A. Oh},
 pages = {22409--22421},
 publisher = {Curran Associates, Inc.},
 title = {Env{P}ool: A Highly Parallel Reinforcement Learning Environment Execution Engine},
 url = {https://proceedings.neurips.cc/paper_files/paper/2022/file/8caaf08e49ddbad6694fae067442ee21-Paper-Datasets_and_Benchmarks.pdf},
 volume = {35},
 year = {2022}
}

Disclaimer

This is not an official Sea Limited or Garena Online Private Limited product.

More Repositories

1. EditAnything (Python, 3,256 stars): Edit anything in images powered by segment-anything, ControlNet, StableDiffusion, etc. (ACM MM)
2. poolformer (Python, 1,290 stars): PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral)
3. volo (Jupyter Notebook, 922 stars): VOLO: Vision Outlooker for Visual Recognition
4. Adan (Python, 743 stars): Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models
5. MDT (Python, 494 stars): Masked Diffusion Transformer is the SOTA for image synthesis. (ICCV 2023)
6. metaformer (Python, 414 stars): MetaFormer Baselines for Vision (TPAMI 2024)
7. lorahub (Python, 380 stars): The official repository of the paper "LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition".
8. mvp (Python, 324 stars): NeurIPS-2021: Direct Multi-view Multi-person 3D Human Pose Estimation
9. CLoT (Python, 290 stars): CVPR'24, official codebase of our paper "Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation".
10. inceptionnext (Python, 245 stars): InceptionNeXt: When Inception Meets ConvNeXt (CVPR 2024)
11. iFormer (Python, 226 stars): iFormer: Inception Transformer
12. ptp (Python, 148 stars): [CVPR2023] The code for "Position-guided Text Prompt for Vision-Language Pre-training"
13. BindDiffusion (Python, 140 stars): BindDiffusion: One Diffusion Model to Bind Them All
14. sailor-llm (Python, 87 stars): ⚓️ Sailor: Open Language Models for South-East Asia
15. FDM (Python, 83 stars): The official PyTorch implementation of Fast Diffusion Model
16. mugs (Python, 78 stars): A PyTorch implementation of Mugs proposed by our paper "Mugs: A Multi-Granular Self-Supervised Learning Framework".
17. Agent-Smith (Python, 69 stars): [ICML2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast
18. sdft (Shell, 67 stars): [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning".
19. symbolic-instruction-tuning (Python, 58 stars): The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning".
20. scaling-with-vocab (Python, 52 stars): 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623
21. ScaleLong (Python, 47 stars): The official repository of the paper "ScaleLong: Towards More Stable Training of Diffusion Model via Scaling Network Long Skip Connection" (NeurIPS 2023)
22. VGT (Python, 44 stars): Video Graph Transformer for Video Question Answering (ECCV'22)
23. jax_xc (Python, 43 stars): Exchange correlation functionals translated from libxc to jax
24. d4ft (Python, 40 stars): A JAX library for Density Functional Theory.
25. finetune-fair-diffusion (Python, 38 stars): Code of the paper: Finetuning Text-to-Image Diffusion Models for Fairness
26. dice (Python, 36 stars): Official implementation of Bootstrapping Language Models via DPO Implicit Rewards
27. ILD (Python, 33 stars): Imitation Learning via Differentiable Physics
28. GP-Nerf (Python, 33 stars): Official implementation for GP-NeRF (ECCV 2022)
29. Consistent3D (Python, 33 stars): The official PyTorch implementation of Consistent3D (CVPR 2024)
30. edp (Python, 32 stars): [NeurIPS 2023] Efficient Diffusion Policy
31. rosmo (Python, 28 stars): Code for "Efficient Offline Policy Optimization with a Learned Model", ICLR 2023
32. MMCBench (Python, 27 stars)
33. GDPO (Python, 24 stars): Graph Diffusion Policy Optimization
34. dualformer (Python, 23 stars)
35. hloenv (C++, 23 stars): An environment based on XLA for deep learning compiler optimization research.
36. DiffMemorize (Python, 21 stars): On Memorization in Diffusion Models
37. optim4rl (Python, 21 stars): Optim4RL is a Jax framework of learning to optimize for reinforcement learning.
38. TEC (Python, 15 stars)
39. numcc (Python, 12 stars): NU-MCC: Multiview Compressive Coding with Neighborhood Decoder and Repulsive UDF
40. PatchAIL (Python, 12 stars): Implementation of PatchAIL in the ICLR 2023 paper "Visual Imitation with Patch Rewards"
41. offbench (Python, 11 stars)
42. OPER (Jupyter Notebook, 11 stars): Code for the paper Offline Prioritized Experience Replay
43. win (Python, 4 stars)
44. P-DoS (Python, 4 stars): [ArXiv 2024] Denial-of-Service Poisoning Attacks on Large Language Models
45. sailcompass (Python, 3 stars)
46. SLRLA-optimizer (Python, 2 stars)
47. Cheating-LLM-Benchmarks (Jupyter Notebook, 2 stars)
48. I-FSJ (Python, 2 stars): Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses
49. MISA (Python, 1 star): [NeurIPS 2023] Mutual Information Regularized Offline Reinforcement Learning