
Bird's-Eye-View Panoptic Segmentation Using Monocular Frontal View Images

This repository contains the PyTorch implementation of the PanopticBEV model proposed in our RA-L 2022 paper Bird's-Eye-View Panoptic Segmentation Using Monocular Frontal View Images.

Our approach, PanopticBEV, is the state-of-the-art method for generating panoptic segmentation maps in the bird's-eye view using only monocular frontal-view images.

PanopticBEV Teaser

If you find this code useful for your research, please consider citing our paper:

@article{gosala21bev,
  author={Gosala, Nikhil and Valada, Abhinav},
  journal={IEEE Robotics and Automation Letters},
  title={Bird's-Eye-View Panoptic Segmentation Using Monocular Frontal View Images},
  year={2022},
  volume={7},
  number={2},
  pages={1968-1975},
  doi={10.1109/LRA.2022.3142418}
}

Relevant links

  • Project website: http://panoptic-bev.cs.uni-freiburg.de

System requirements

  • Linux (Tested on Ubuntu 18.04)
  • Python3 (Tested using Python 3.6.9)
  • PyTorch (Tested using PyTorch 1.8.1)
  • CUDA (Tested using CUDA 11.1)
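To confirm that your host matches the tested versions listed above, the standard version queries are enough:

python3 --version   # expect Python 3.6.x
nvcc --version      # expect CUDA release 11.1
nvidia-smi          # the installed driver must support CUDA 11.1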

Installation

a. Create a Python virtual environment and activate it.

python3 -m venv panoptic_bev
source panoptic_bev/bin/activate

b. Update pip to the latest version.

python3 -m pip install --upgrade pip

c. Install the required Python dependencies using the provided requirements.txt file.

pip3 install -r requirements.txt

d. Install the PanopticBEV code.

python3 setup.py develop
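As a quick sanity check that the environment and the editable install both work, the following one-liners can be used; the version strings reflect the tested setup above, and panoptic_bev as the top-level module name is an assumption, not confirmed by the repository:

python3 -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"   # expect: 1.8.1 11.1 True
python3 -c "import panoptic_bev"   # assumed module name; no output means the package is importable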

PanopticBEV datasets

KITTI-360

  • Download the KITTI-360 dataset from here.
  • Download the KITTI-360 PanopticBEV dataset from here.
  • In the training and evaluation scripts:
    • Modify the dataset_root_dir parameter to point to the location of the original KITTI-360 dataset.
    • Modify the seam_root_dir parameter to point to the location of the KITTI-360 PanopticBEV dataset.

nuScenes

  • Download the nuScenes dataset from here.
  • Download the nuScenes PanopticBEV dataset from here.
  • In the training and evaluation scripts (a sketch of the edited entries follows below):
    • Modify the dataset_root_dir parameter to point to the location of the original nuScenes dataset.
    • Modify the seam_root_dir parameter to point to the location of the nuScenes PanopticBEV dataset.
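For illustration only, the edited entries might look as follows; the paths are placeholders, and the exact variable syntax used in the provided shell scripts may differ:

dataset_root_dir="/data/nuscenes"            # original nuScenes dataset (placeholder path)
seam_root_dir="/data/nuscenes_panopticbev"   # nuScenes PanopticBEV dataset (placeholder path)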

Code execution

Configuration parameters

The configuration parameters of the model such as the learning rate, batch size, and dataloader options are stored in the experiments/config folder. If you intend to modify the model parameters, please do so here.
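As a rough sketch only (the section names and keys below are assumptions for illustration, not the actual schema), a file in experiments/config might hold entries such as:

[optimizer]
base_lr = 0.01            # learning rate (illustrative key name)

[dataloader]
train_batch_size = 8      # batch size (illustrative key name)
num_workers = 4           # dataloader worker processes (illustrative key name)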

Training and evaluation

The Python code for training and evaluation, along with the shell scripts to execute it, is provided in the scripts folder. Before running the shell scripts, please fill in the missing parameters with your computer-specific data paths and parameters.

To train the model, execute the following command after replacing * with either kitti or nuscenes.

bash train_panoptic_bev_*.sh

To evaluate the model, execute the following command after replacing * with either kitti or nuscenes.

bash eval_panoptic_bev_*.sh 
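For example, to train and then evaluate on KITTI-360:

bash train_panoptic_bev_kitti.sh
bash eval_panoptic_bev_kitti.sh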

Acknowledgements

This work was supported by the Federal Ministry of Education and Research (BMBF) of Germany under ISA 4.0 and by the Eva Mayr-Stihl Stiftung.

This project contains code adapted from other open-source projects, and we especially thank their authors.

License

This code is released under the GPLv3 for academic usage. For commercial usage, please contact Nikhil Gosala.

More Repositories

1. LCDNet (Python, 159 stars): PyTorch code for training LCDNet for loop closure detection in LiDAR SLAM. http://rl.uni-freiburg.de/research/lidar-slam-lc
2. ros_sam (CMake, 143 stars): ROS wrapper for Meta's Segment-Anything model.
3. CL-SLAM (Python, 122 stars): Continual SLAM: Beyond Lifelong Simultaneous Localization and Mapping through Continual Learning. http://continual-slam.cs.uni-freiburg.de
4. EfficientLPS (Python, 92 stars): PyTorch code for training EfficientLPS for LiDAR panoptic segmentation. https://rl.uni-freiburg.de/research/lidar-panoptic
5. MM-DistillNet (Python, 58 stars): PyTorch code for training MM-DistillNet for multimodal knowledge distillation. http://rl.uni-freiburg.de/research/multimodal-distill
6. PADLoC (Python, 50 stars): LiDAR-Based Deep Loop Closure Detection and Registration using Panoptic Attention.
7. CURB-SG (C++, 46 stars): [ICRA 2024] Collaborative Dynamic 3D Scene Graphs for Automated Driving.
8. mobile-rl (Python, 43 stars): Learning Navigation for Arbitrary Mobile Manipulation Motions in Unseen and Dynamic Environments. http://mobile-rl.cs.uni-freiburg.de
9. MoMa-LLM (Python, 37 stars): Language-Grounded Dynamic Scene Graphs for Interactive Object Search with Mobile Manipulation. http://moma-llm.cs.uni-freiburg.de
10. BEVCar (Python, 33 stars): [IROS 2024] Camera-Radar Fusion for BEV Map and Object Segmentation.
11. Batch3DMOT (Python, 31 stars): 3D Multi-Object Tracking Using Graph Neural Networks with Cross-Edge Modality Attention. http://batch3dmot.cs.uni-freiburg.de
12. Panoptic-Tracking (Python, 25 stars)
13. SPINO (Python, 24 stars): Few-Shot Panoptic Segmentation With Foundation Models.
14. CoDEPS (Python, 24 stars): Continual Learning for Depth Estimation and Panoptic Segmentation.
15. SkyEye (Python, 23 stars): Self-Supervised Bird's-Eye-View Semantic Mapping Using Monocular Frontal View Images.
16. DynaFill (Python, 22 stars): Dynamic Object Removal and Spatio-Temporal RGB-D Inpainting via Geometry-Aware Adversarial Learning.
17. CARTO (Jupyter Notebook, 20 stars): Official Implementation of CARTO: Category and Joint Agnostic Reconstruction of ARTiculated Objects.
18. kinematic-feasibility-rl (EmberScript, 19 stars): Learning Kinematic Feasibility through Reinforcement Learning. http://rl.uni-freiburg.de/research/kinematic-feasibility-rl
19. MDPCalib (16 stars): Automatic Target-Less Camera-LiDAR Calibration from Motion and Deep Point Correspondences.
20. RaLF (Python, 14 stars): Flow-based Global and Metric Radar Localization in LiDAR Maps.
21. HIMOS (Python, 13 stars): Learning Hierarchical Interactive Multi-Object Search for Mobile Manipulation. http://himos.cs.uni-freiburg.de
22. CEILing (Python, 13 stars)
23. Active-Particle-Filter-Networks (Python, 11 stars): Official repository for Active Particle Filter Networks: Efficient Active Localization in Continuous Action Spaces and Large Maps.
24. CenterGrasp (Python, 10 stars)
25. Multi-Object-Search (Python, 10 stars): Learning Long-Horizon Robot Exploration Strategies for Multi-Object Search in Continuous Action Spaces. http://multi-object-search.cs.uni-freiburg.de
26. Dav-Nav (Python, 8 stars): Catch Me If You Hear Me: Audio-Visual Navigation in Complex Unmapped Environments with Moving Sounds. http://dav-nav.cs.uni-freiburg.de
27. PASTEL (7 stars): A Good Foundation is Worth Many Labels: Label-Efficient Panoptic Segmentation.
28. amodal-panoptic (Python, 6 stars)
29. Semantic-Search (Jupyter Notebook, 5 stars): Perception Matters: Enhancing Embodied AI with Uncertainty-Aware Semantic Segmentation. http://semantic-search.cs.uni-freiburg.de
30. TAPAS (4 stars): PyTorch code for TAPAS-GMM.
31. bopt_gmm (Shell, 2 stars)
32. bask (Python, 2 stars): PyTorch code for Bayesian Scene Keypoints.
33. APSNet (Python, 1 star)
34. rl_tasks (Python, 1 star)
35. PAPS (Python, 1 star)
36. INoD (Python, 1 star): Injected Noise Discriminator for Self-Supervised Representation Learning in Agricultural Fields.