
PyTorch code for training LCDNet for loop closure detection in LiDAR SLAM. http://rl.uni-freiburg.de/research/lidar-slam-lc

LCDNet: Deep Loop Closure Detection and Point Cloud Registration for LiDAR SLAM (IEEE T-RO 2022)

Official PyTorch implementation of LCDNet.

Installation

You can install LCDNet locally on your machine, or use the provided Dockerfile to run it in a container. The environment_lcdnet.yml file is meant to be used with Docker, as it contains package versions that are specific to a particular CUDA version; we don't recommend using it for a local installation.

Local Installation

  1. Install PyTorch (make sure to select the correct cuda version)
  2. Install the requirements pip install -r requirements.txt
  3. Install spconv <= 2.1.25 (make sure to select the correct cuda version, for example pip install spconv-cu113==2.1.25 for cuda 11.3)
  4. Install OpenPCDet
  5. Install faiss-cpu - NOTE: avoid installing faiss via pip; use the conda version or, alternatively, build it from source (an example install sequence is shown after this list).
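
A minimal example of the full sequence, assuming CUDA 11.3, PyTorch 1.10, and a fresh conda environment (the environment name and package versions are illustrative; adjust them to your CUDA setup):

conda create -n lcdnet python=3.8
conda activate lcdnet
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
pip install -r requirements.txt
pip install spconv-cu113==2.1.25
git clone https://github.com/open-mmlab/OpenPCDet.git && cd OpenPCDet && python setup.py develop && cd ..
conda install -c pytorch faiss-cpu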

Tested in the following environments:

  • Ubuntu 18.04/20.04/22.04
  • cuda 10.2/11.1/11.3
  • pytorch 1.8/1.9/1.10
  • Open3D 0.12.0

Note

We noticed that the RANSAC implementation in Open3D versions >= 0.15 produces poor results. We tested our code with Open3D versions between 0.12.0 and 0.14.2; please use one of these versions, as results might be very different otherwise.

We also noticed that spconv version 2.2 or higher is not compatible with the pretrained weights provided with this repository. Spconv version 2.1.25 or lower is required to properly load the pretrained model.
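
For example, compatible versions of both packages can be pinned explicitly (a minimal sketch assuming CUDA 11.3; pick the spconv package matching your CUDA version):

pip install open3d==0.14.1 spconv-cu113==2.1.25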

Docker

  1. Install Docker and NVIDIA-Docker (see here for instructions)
  2. Download the pretrained model (see Pretrained model section) in the same folder as the Dockerfile
  3. Build the docker image docker build --tag lcdnet -f Dockerfile .
  4. Run the docker container docker run --gpus all -it --rm -v KITTI_ROOT:/data/KITTI lcdnet
  5. From inside the container, activate the anaconda environment conda activate lcdnet and change directory to the LCDNet folder cd LCDNet
  6. Run the training or evaluation scripts (see the Training and Evaluation sections); a complete worked example is shown below. The weights of the pretrained model are copied inside the container under /pretreined_models/LCDNet-kitti360.tar.
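
As an illustration, a complete session that evaluates the pretrained model on KITTI sequence 08 might look as follows (the host path /data/datasets/KITTI is hypothetical, so replace it with your KITTI_ROOT; the mounted dataset is assumed to be already preprocessed as described in the Preprocessing section, and --without_ground should be dropped if you skipped the ground plane removal step):

docker build --tag lcdnet -f Dockerfile .
docker run --gpus all -it --rm -v /data/datasets/KITTI:/data/KITTI lcdnet
# inside the container:
conda activate lcdnet
cd LCDNet
python -m evaluation.inference_loop_closure --root_folder /data/KITTI --dataset kitti --validation_sequence 08 --weights_path /pretreined_models/LCDNet-kitti360.tar --without_ground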

Preprocessing

KITTI

Download the SemanticKITTI dataset and generate the loop ground truths:

python -m data_process.generate_loop_GT_KITTI --root_folder KITTI_ROOT

where KITTI_ROOT is the path where you downloaded and extracted the SemanticKITTI dataset.

NOTE: although the semantic labels are not required to run LCDNet, we use the improved ground truth poses provided with the SemanticKITTI dataset.

KITTI-360

Download the KITTI-360 dataset (raw velodyne scans, calibrations and vehicle poses) and generate the loop ground truths:

python -m data_process.generate_loop_GT_KITTI360 --root_folder KITTI360_ROOT

where KITTI360_ROOT is the path where you downloaded and extracted the KITTI-360 dataset.

Optional: Ground Plane Removal

To achieve better results, it is suggested to preprocess the datasets by removing the ground plane:

python -m data_process.remove_ground_plane_kitti --root_folder KITTI_ROOT

python -m data_process.remove_ground_plane_kitti360 --root_folder KITTI360_ROOT

If you skip this step, please remove the option --without_ground in all the following steps.

Training

The training script will use all available GPUs. To use only a subset of the GPUs, set the CUDA_VISIBLE_DEVICES environment variable (see the example after the training commands below).

To train on the KITTI dataset:

python -m training_KITTI_DDP --root_folder KITTI_ROOT --dataset kitti --batch_size B --without_ground

To train on the KITTI-360 dataset:

python -m training_KITTI_DDP --root_folder KITTI360_ROOT --dataset kitti360 --batch_size B --without_ground

To track the training progress using Weights & Biases, add the argument --wandb. The per-GPU batch size B must be at least 2, and a GPU with at least 8 GB of memory is required (12 GB or more is preferred). In our experiments we used a per-GPU batch size of 6 on 4 x 24 GB GPUs, for a total batch size of 24.
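
For example, to train on KITTI with the per-GPU batch size used in our experiments, restricted to four GPUs and with Weights & Biases logging enabled (the GPU indices are illustrative):

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m training_KITTI_DDP --root_folder KITTI_ROOT --dataset kitti --batch_size 6 --without_ground --wandb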

The network's weights will be saved in the folder ./checkpoints (you can change this folder with the argument --checkpoints_dest), inside a subfolder named with the starting date and time of the training (format %d-%m-%Y_%H-%M-%S), for example: 20-02-2022_16-38-24

Evaluation

Loop Closure

To evaluate the loop closure performance of the trained model on the KITTI dataset:

python -m evaluation.inference_loop_closure --root_folder KITTI_ROOT --dataset kitti --validation_sequence 08 --weights_path WEIGHTS --without_ground

where WEIGHTS is the path of the pretrained model, for example ./checkpoints/20-02-2022_16-38-24/checkpoint_last_iter.tar

Similarly, on the KITTI-360 dataset:

python -m evaluation.inference_loop_closure --root_folder KITTI360_ROOT --dataset kitti360 --validation_sequence 2013_05_28_drive_0002_sync --weights_path WEIGHTS --without_ground

Point Cloud Registration

To evaluate the point cloud registration performance of the trained model on the KITTI and KITTI-360 datasets:

python -m evaluation.inference_yaw_general --root_folder KITTI_ROOT --dataset kitti --validation_sequence 08 --weights_path WEIGHTS --ransac --without_ground

python -m evaluation.inference_yaw_general --root_folder KITTI360_ROOT --dataset kitti360 --validation_sequence 2013_05_28_drive_0002_sync --weights_path WEIGHTS --ransac --without_ground

To evaluate LCDNet (fast), remove the --ransac argument.
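
For example, the LCDNet (fast) evaluation on KITTI then becomes:

python -m evaluation.inference_yaw_general --root_folder KITTI_ROOT --dataset kitti --validation_sequence 08 --weights_path WEIGHTS --without_ground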

Pretrained Model

A model pretrained on the KITTI-360 dataset can be found here

Paper

"LCDNet: Deep Loop Closure Detection and Point Cloud Registration for LiDAR SLAM"

If you use LCDNet, please cite:

@ARTICLE{cattaneo2022tro,
  author={Cattaneo, Daniele and Vaghi, Matteo and Valada, Abhinav},
  journal={IEEE Transactions on Robotics}, 
  title={LCDNet: Deep Loop Closure Detection and Point Cloud Registration for LiDAR SLAM}, 
  year={2022},
  volume={},
  number={},
  pages={1-20},
  doi={10.1109/TRO.2022.3150683}
 }

Contacts

License

For academic usage, the code is released under the GPLv3 license. For any commercial purpose, please contact the authors.

More Repositories

  1. ros_sam - ROS wrapper for Meta's Segment-Anything model (CMake, 143 stars)
  2. CL-SLAM - Continual SLAM: Beyond Lifelong Simultaneous Localization and Mapping through Continual Learning. http://continual-slam.cs.uni-freiburg.de (Python, 122 stars)
  3. PanopticBEV - Bird's-Eye-View Panoptic Segmentation Using Monocular Frontal View Images. http://panoptic-bev.cs.uni-freiburg.de (Python, 119 stars)
  4. EfficientLPS - PyTorch code for training EfficientLPS for LiDAR panoptic segmentation. https://rl.uni-freiburg.de/research/lidar-panoptic (Python, 92 stars)
  5. MM-DistillNet - PyTorch code for training MM-DistillNet for multimodal knowledge distillation. http://rl.uni-freiburg.de/research/multimodal-distill (Python, 58 stars)
  6. PADLoC - LiDAR-Based Deep Loop Closure Detection and Registration using Panoptic Attention (Python, 50 stars)
  7. CURB-SG - [ICRA 2024] Collaborative Dynamic 3D Scene Graphs for Automated Driving (C++, 46 stars)
  8. mobile-rl - Learning Navigation for Arbitrary Mobile Manipulation Motions in Unseen and Dynamic Environments. http://mobile-rl.cs.uni-freiburg.de (Python, 43 stars)
  9. MoMa-LLM - Language-Grounded Dynamic Scene Graphs for Interactive Object Search with Mobile Manipulation. Project website: http://moma-llm.cs.uni-freiburg.de (Python, 37 stars)
  10. BEVCar - [IROS2024] Camera-Radar Fusion for BEV Map and Object Segmentation (Python, 33 stars)
  11. Batch3DMOT - 3D Multi-Object Tracking Using Graph Neural Networks with Cross-Edge Modality Attention. http://batch3dmot.cs.uni-freiburg.de (Python, 31 stars)
  12. Panoptic-Tracking (Python, 25 stars)
  13. SPINO - Few-Shot Panoptic Segmentation With Foundation Models (Python, 24 stars)
  14. CoDEPS - Continual Learning for Depth Estimation and Panoptic Segmentation (Python, 24 stars)
  15. SkyEye - SkyEye: Self-Supervised Bird's-Eye-View Semantic Mapping Using Monocular Frontal View Images (Python, 23 stars)
  16. DynaFill - Dynamic Object Removal and Spatio-Temporal RGB-D Inpainting via Geometry-Aware Adversarial Learning (Python, 22 stars)
  17. CARTO - Official Implementation of CARTO: Category and Joint Agnostic Reconstruction of ARTiculated Objects (Jupyter Notebook, 20 stars)
  18. kinematic-feasibility-rl - Learning Kinematic Feasibility through Reinforcement Learning: http://rl.uni-freiburg.de/research/kinematic-feasibility-rl (EmberScript, 19 stars)
  19. MDPCalib - Automatic Target-Less Camera-LiDAR Calibration from Motion and Deep Point Correspondences (16 stars)
  20. RaLF - RaLF: Flow-based Global and Metric Radar Localization in LiDAR Maps (Python, 14 stars)
  21. HIMOS - Learning Hierarchical Interactive Multi-Object Search for Mobile Manipulation. Project website: http://himos.cs.uni-freiburg.de (Python, 13 stars)
  22. CEILing (Python, 13 stars)
  23. Active-Particle-Filter-Networks - Official repository for Active Particle Filter Networks: Efficient Active Localization in Continuous Action Spaces and Large Maps (Python, 11 stars)
  24. CenterGrasp (Python, 10 stars)
  25. Multi-Object-Search - Learning Long-Horizon Robot Exploration Strategies for Multi-Object Search in Continuous Action Spaces. http://multi-object-search.cs.uni-freiburg.de (Python, 10 stars)
  26. Dav-Nav - Catch Me If You Hear Me: Audio-Visual Navigation in Complex Unmapped Environments with Moving Sounds. http://dav-nav.cs.uni-freiburg.de (Python, 8 stars)
  27. PASTEL - A Good Foundation is Worth Many Labels: Label-Efficient Panoptic Segmentation (7 stars)
  28. amodal-panoptic (Python, 6 stars)
  29. Semantic-Search - Perception Matters: Enhancing Embodied AI with Uncertainty-Aware Semantic Segmentation. Project Website: http://semantic-search.cs.uni-freiburg.de (Jupyter Notebook, 5 stars)
  30. TAPAS - PyTorch code for TAPAS-GMM. (4 stars)
  31. bopt_gmm (Shell, 2 stars)
  32. bask - PyTorch code for Bayesian Scene Keypoints. (Python, 2 stars)
  33. APSNet (Python, 1 star)
  34. rl_tasks (Python, 1 star)
  35. PAPS (Python, 1 star)
  36. INoD - INoD: Injected Noise Discriminator for Self-Supervised Representation Learning in Agricultural Fields. (Python, 1 star)