Graph Neural Networks for Decentralized Path Planning

News 2021

We have created a new repo, magat_pathplanning, which integrates this repo and MAGAT (RAL 2021) with several major updates: training speed-ups, simulator improvements, a reworked code structure, and clearer comments.

We highly recommend using the new repository to replicate and experiment with the GNN path planner described on this page.

PyTorch Project for Graph Neural Network based MAPF

Code accompanying the paper

Graph Neural Networks for Decentralized Multi-Robot Path Planning

by Qingbiao Li (1), Fernando Gama (2), Alejandro Ribeiro (2), and Amanda Prorok (1), at the University of Cambridge (1) and the University of Pennsylvania (2).

Project Diagram:

(project framework diagram)

Framework Structure:

The repo has the following structure:

├── agents (overall framework for training and testing)
|  └── base.py
|  └── decentralplannerlocal.py
|  └──      (DCP)
|  └── decentralplannerlocal_OnlineExpert.py
|  └──      (DCP with online-expert mechanism)
|
├── configs (key parameters for the training and inference stages)
|  └── dcp_ECBS.json
|  └── dcp_onlineExpert.json
|
├── dataloader (load data for training)
|  └── Dataloader_dcplocal_notTF_onlineExpert.py
|
├── graphs
|  └── models (model pipeline: CNN -> GNN -> MLP)
|  |  |
|  |  └── decentralplanner.py
|  |
|  └── losses
|     └── cross_entropy.py
|
├── utils
|  |
|  └── assets
|  |  └── dataTools.py
|  |  └── graphML.py
|  |  └── graphTools.py
|  |
|  └── multirobotsim_dcenlocal.py
|  └──      (simulator for decentralized agents)
|  └── multirobotsim_dcenlocal_onlineExpert.py
|  └──      (simulator for decentralized agents with the online-expert mechanism, where failure cases are saved)
|  └── visualize.py
|  └──      (visualize the predicted paths with communication links)
|  └── visualize_expertAlg.py
|  └──      (visualize the ground-truth paths)
|  └── metrics.py
|  └──      (record statistics during the inference stage)
|  └── config.py
|
├── offlineExpert
|  |
|  └── CasesSolver.py
|  └──      1. (generate map) Randomly generate maps with customized size and obstacle density.
|  └──      2. (cases under a map) For each map, generate random start and goal positions for each agent.
|  └──      3. (solve each case) Apply the expert algorithm to compute the solution.
|  |
|  └── DataGen_Transformer.py
|  └──      (transform the solutions into the data format loaded by the dataloader,
|  └──          including: map, input tensor with each agent's path, and GSO)
|
├── onlineExpert
|  |
|  └── ECBS_onlineExpert.py
|  └──      (apply the expert algorithm to compute solutions for failure cases recorded during training)
|  |
|  └── DataTransformer_local_onlineExpert.py
|  └──      (transform the solutions into the same data format, then merge them into the offline dataset)
|
├── experiments
|
├── data
|
├── statistic_analysis
|  └── (Fig. 3) result_analysis_errorbar.py
|  └── (Fig. 4) result_analysis_generalization_colormap.py
|  └── (Fig. 5) result_analysis_hist_impact_3K.py
|
└── main.py
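The GSO files mentioned above (produced by DataGen_Transformer.py and consumed by the dataloader and visualizer) encode the inter-agent communication graph. As a rough illustration only, not this repo's implementation, a Graph Shift Operator can be built from agent positions and a communication radius; the normalization by maximum degree here is a simplifying assumption standing in for spectral-norm normalization:

```python
# Hypothetical sketch: building a Graph Shift Operator (GSO) for a team of
# agents from their 2D positions and a communication radius.
import math

def build_gso(positions, comm_radius):
    """Binary adjacency within comm_radius, scaled by the maximum degree
    (a cheap proxy for largest-eigenvalue normalization) so that repeated
    graph shifts stay numerically stable."""
    n = len(positions)
    adj = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if math.hypot(dx, dy) <= comm_radius:
                adj[i][j] = adj[j][i] = 1.0  # agents i and j can communicate
    max_deg = max((sum(row) for row in adj), default=0.0)
    if max_deg > 0:
        adj = [[v / max_deg for v in row] for row in adj]
    return adj

# Agents 0 and 1 are within range of each other; agent 2 is isolated.
positions = [(0, 0), (1, 0), (5, 5)]
gso = build_gso(positions, comm_radius=2.0)
```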

Requirements:

easydict>=1.7
matplotlib>=3.1.2
numpy>=1.14.5
Pillow>=5.2.0
scikit-image>=0.14.0
scikit-learn>=0.19.1
scipy>=1.1.0
tensorboardX>=1.2
torch>=1.1.0
torchvision>=0.3.0

How to use this repo:

Test a trained network, for example DCP OE - K=3

  1. Download the dataset and the trained network.
  2. Change 'data_root' and 'save_data' in ./configs/dcp_onlineExpert.json, then run
python main.py configs/dcp_onlineExpert.json --mode test --log_anime --best_epoch --test_general --log_time_trained 1582034757 --nGraphFilterTaps 3 --map_w 20 --num_agents 10 --trained_num_agents 10 --trained_map_w 20

Train a new network, with DCP OE - K=3

python main.py configs/dcp_onlineExpert.json --mode train  --map_w 20 --nGraphFilterTaps 3  --num_agents 10  --trained_num_agents 10
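The --nGraphFilterTaps flag sets the number of filter taps K of the graph convolution, which bounds how far information propagates: a K-tap filter aggregates features from up to (K-1)-hop neighbours. This toy sketch (not this repo's decentralplanner.py; scalar features and hand-picked weights for illustration) computes y = sum over k of w_k * S^k x by repeatedly applying the GSO S:

```python
# Hypothetical K-tap graph filter: y = w_0*x + w_1*(S x) + ... + w_{K-1}*(S^{K-1} x)
def graph_filter(S, x, weights):
    n = len(x)
    z = list(x)                       # z holds S^k x, starting at k = 0
    y = [weights[0] * v for v in z]
    for w in weights[1:]:
        # one graph shift: z <- S z (each agent mixes its neighbours' values)
        z = [sum(S[i][j] * z[j] for j in range(n)) for i in range(n)]
        y = [yi + w * zi for yi, zi in zip(y, z)]
    return y

# 3 agents on a line (binary GSO); K = 2 taps mixes 1-hop neighbour features,
# so agent 1 picks up half of agent 0's feature.
S = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
x = [1.0, 0.0, 0.0]
print(graph_filter(S, x, weights=[1.0, 0.5]))  # [1.0, 0.5, 0.0]
```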

More settings can be found in the scripts.

Visualization

python ./utils/visualize.py --map [Path_to_Cases]/successCases_ID00000.yaml --schedule  [Path_to_Cases]/predict_success/successCases_ID00000.yaml --GSO  [Path_to_Cases]/GSO/successCases_ID00000.mat --speed 2 --video [predict_success]/video.mp4 --nGraphFilterTaps 2 --id_chosenAgent 0

where [Path_to_Cases] is determined by the location of the 'Results/AnimeDemo' directory.

License:

This work is based on a scalable template by Hager Rady and Mo'men AbdelRazek.

The graph neural network module of this work is based on the GNN library from Alelab at the University of Pennsylvania.

The graph mapf project is licensed under the MIT License - see the LICENSE file for details.

Citation:

If you use this work in an academic context, please cite:

@article{li2019graph,
  title={Graph Neural Networks for Decentralized Multi-Robot Path Planning},
  author={Li, Qingbiao and Gama, Fernando and Ribeiro, Alejandro and Prorok, Amanda},
  journal={arXiv preprint arXiv:1912.06095},
  year={2019}
}
