
MetaDrive: Open-source driving simulator


MetaDrive: Composing Diverse Driving Scenarios for Generalizable RL


MetaDrive is a driving simulator with the following key features:

  • Compositional: It supports generating infinite scenes with various road maps and traffic settings for the research of generalizable RL (see the configuration sketch after this list).
  • Lightweight: It is easy to install and run. It can run up to 300 FPS on a standard PC.
  • Realistic: Accurate physics simulation and multiple sensory inputs, including Lidar, RGB images, top-down semantic maps and first-person view images.
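
For instance, below is a minimal configuration sketch of the compositional feature. Only num_scenarios appears elsewhere in this README; start_seed, map, and traffic_density are assumed config keys that may differ between MetaDrive versions.

import metadrive  # registers the environments and exposes MetaDriveEnv

# Each seed corresponds to a different procedurally generated scenario,
# so varying num_scenarios / start_seed yields different maps and traffic.
env = metadrive.MetaDriveEnv(config={
    "num_scenarios": 100,    # number of procedurally generated scenarios
    "start_seed": 0,         # assumed key: seed of the first scenario
    "map": 4,                # assumed key: number of road blocks per map
    "traffic_density": 0.1,  # assumed key: density of traffic vehicles
})
env.reset()
env.close()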

🛠 Quick Start

Install MetaDrive via:

git clone https://github.com/metadriverse/metadrive.git
cd metadrive
pip install -e .

or

pip install metadrive-simulator

Note that the program has been tested on both Linux and Windows. Some control and display issues on macOS remain to be resolved.

You can verify the installation of MetaDrive by running the testing script:

# Run this in a folder that does not contain a sub-folder named metadrive
python -m metadrive.examples.profile_metadrive

Note that you should not run the above command in a folder that has a sub-folder called ./metadrive; otherwise Python may import that local folder instead of the installed package.

🚕 Examples

We provide examples to demonstrate the features and basic usage of MetaDrive after local installation. You can also run some of the examples directly in Colab.

Single Agent Environment

Run the following command to launch a simple driving scenario with auto-drive mode on. Press W, A, S, D to drive the vehicle manually.

python -m metadrive.examples.drive_in_single_agent_env
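
Below is a minimal sketch of the same setup from a Python script; the manual_control config key is an assumption based on common MetaDrive usage and may differ between versions.

import metadrive

# Assumed: with "manual_control" enabled, keyboard input (W, A, S, D)
# overrides the action passed to step() while the render window is focused.
env = metadrive.MetaDriveEnv(config={"use_render": True, "manual_control": True})
env.reset()
for _ in range(1000):
    obs, reward, terminated, truncated, info = env.step([0.0, 0.0])
    if terminated or truncated:
        env.reset()
env.close()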

Run the following command to launch a safe driving scenario, which contains more complex obstacles and provides a cost signal in addition to the reward.

python -m metadrive.examples.drive_in_safe_metadrive_env
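
Below is a hedged sketch of reading the cost signal from a script; the SafeMetaDriveEnv import path and the "cost" entry in the step info dict are assumptions that may vary between versions.

from metadrive.envs.safe_metadrive_env import SafeMetaDriveEnv

env = SafeMetaDriveEnv(config={"use_render": False})
env.reset()
total_cost = 0.0
for _ in range(100):
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    total_cost += info.get("cost", 0.0)  # assumed key: cost from collisions or driving off road
    if terminated or truncated:
        break
env.close()
print("accumulated cost:", total_cost)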

Multi-Agent Environment

You can also launch an instance of the multi-agent scenario as follows:

python -m metadrive.examples.drive_in_multi_agent_env --env roundabout

--env accepts the following parameters: roundabout (default), intersection, tollgate, bottleneck, parkinglot, pgmap. Adding --top_down launches the top-down pygame renderer.
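
The multi-agent environments can also be created from a script. The sketch below is hedged: the import path, the num_agents config key, the dict-structured spaces, and the Gymnasium-style five-value return are assumptions that may differ between versions.

from metadrive.envs.marl_envs import MultiAgentRoundaboutEnv

env = MultiAgentRoundaboutEnv(config={"num_agents": 4})
env.reset()
# The action space is assumed to be a Dict space, so sample() yields one
# random action per active agent, keyed by agent name.
actions = env.action_space.sample()
obs, rewards, terminateds, truncateds, infos = env.step(actions)
env.close()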

Real Environment

Running the following script enables driving in a scenario constructed from the Waymo motion dataset.

python -m metadrive.examples.drive_in_waymo_env

Traffic vehicles cannot respond to surrounding vehicles if they are directly replayed from the dataset. Add the argument --reactive_traffic to control them with an IDM policy and make them reactive, as shown below. Press the key r to load a new scenario, and b or q to switch perspective.
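
For example, the following command replays the Waymo scenarios with reactive traffic:

python -m metadrive.examples.drive_in_waymo_env --reactive_traffic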

Basic Usage

To build the RL environment in a Python script, you can simply follow the Farama Gymnasium format:

import metadrive  # Import this package to register the environment!
import gymnasium as gym

env = gym.make("MetaDrive-validation-v0", config={"use_render": True})

# Alternatively, you can instantiate using the class
# env = metadrive.MetaDriveEnv(config={"use_render": True, "num_scenarios": 100})

env.reset()
for i in range(1000):
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())  # Use random policy
    if terminated or truncated:
        env.reset()
env.close()

🏫 Documentation

Find more details in the MetaDrive documentation.

📎 References

If you use MetaDrive in your own work, please cite:

@article{li2022metadrive,
  title={Metadrive: Composing diverse driving scenarios for generalizable reinforcement learning},
  author={Li, Quanyi and Peng, Zhenghao and Feng, Lan and Zhang, Qihang and Xue, Zhenghai and Zhou, Bolei},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2022}
}

🎉 Relevant Projects

Learning to Simulate Self-driven Particles System with Coordinated Policy Optimization
Zhenghao Peng, Quanyi Li, Chunxiao Liu, Bolei Zhou
NeurIPS 2021
[Paper] [Code] [Webpage] [Poster] [Talk] [Results&Models]

Safe Driving via Expert Guided Policy Optimization
Zhenghao Peng*, Quanyi Li*, Chunxiao Liu, Bolei Zhou
Conference on Robot Learning (CoRL) 2021
[Paper] [Code] [Webpage] [Poster]

Efficient Learning of Safe Driving Policy via Human-AI Copilot Optimization
Quanyi Li*, Zhenghao Peng*, Bolei Zhou
ICLR 2022
[Paper] [Code] [Webpage] [Poster] [Talk]

Human-AI Shared Control via Policy Dissection
Quanyi Li, Zhenghao Peng, Haibin Wu, Lan Feng, Bolei Zhou
NeurIPS 2022
[Paper] [Code] [Webpage]

And more:

  • Yang, Yujie, Yuxuan Jiang, Yichen Liu, Jianyu Chen, and Shengbo Eben Li. "Model-Free Safe Reinforcement Learning through Neural Barrier Certificate." IEEE Robotics and Automation Letters (2023).

  • Feng, Lan, Quanyi Li, Zhenghao Peng, Shuhan Tan, and Bolei Zhou. "TrafficGen: Learning to Generate Diverse and Realistic Traffic Scenarios." (ICRA 2023)

  • Xue, Zhenghai, Zhenghao Peng, Quanyi Li, Zhihan Liu, and Bolei Zhou. "Guarded Policy Optimization with Imperfect Online Demonstrations." (ICLR 2023)


Acknowledgement

The simulator could not have been built without the help of the Panda3D community and the following open-source projects: