• Stars: 193
  • Rank: 201,081 (Top 4%)
  • Language: Python
  • License: Other
  • Created: almost 6 years ago
  • Updated: 4 months ago


Repository Details

[CVPR2019 Oral] Self-supervised Point Cloud Map Estimation

DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds

This repository contains PyTorch implementation associated with the paper:

"DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds", Li Ding and Chen Feng, CVPR 2019 (Oral).

Citation

If you find DeepMapping useful in your research, please cite:

@InProceedings{Ding_2019_CVPR,
author = {Ding, Li and Feng, Chen},
title = {DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}

Dependencies

Requires Python 3.x, PyTorch, Open3D, and other common packages listed in requirements.txt:

pip3 install -r requirements.txt

Running on a GPU is highly recommended. The code has been tested with Python 3.6.5, PyTorch 0.4.0, and Open3D 0.4.0.

Getting Started

Dataset

Simulated 2D point clouds are provided as ./data/2D/all_poses.tar. Extract the tar file:

tar -xvf ./data/2D/all_poses.tar -C ./data/2D/

A set of sub-directories will be created. For example, ./data/2D/v1_pose0 corresponds to trajectory 0 sampled from the environment v1. This folder contains 256 local point clouds saved in PCD file format. The corresponding ground truth sensor poses are saved in gt_pose.mat as a 256-by-3 matrix, where the i-th row represents the sensor pose [x,y,theta] for the i-th point cloud.
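Each pose row [x,y,theta] defines a 2D rigid transform from the sensor frame into the global frame. As a hedged illustration (synthetic data and hypothetical helper names, not the repo's own loaders), applying such a pose to local points with NumPy looks like:

```python
import numpy as np

def pose_to_transform(pose):
    """Build a 3x3 homogeneous transform from a [x, y, theta] pose row."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def transform_points(points, pose):
    """Map an Nx2 array of local points into the global frame."""
    T = pose_to_transform(pose)
    homo = np.hstack([points, np.ones((len(points), 1))])  # Nx3 homogeneous
    return (homo @ T.T)[:, :2]

# A pose [1, 2, pi/2] rotates the local frame by 90 degrees and
# translates it by (1, 2): the local point (1, 0) lands at (1, 3).
pts = np.array([[1.0, 0.0]])
print(transform_points(pts, [1.0, 2.0, np.pi / 2]))  # ~[[1., 3.]]
```

Registering the 256 scans amounts to estimating these 256 pose rows.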

Solving Registration As Unsupervised Training

To run DeepMapping, execute the script:

./script/run_train_2D.sh

By default, the results will be saved to ./results/2D/.

Warm Start

DeepMapping allows for seamless integration of a "warm start" to reduce convergence time with improved performance. Instead of starting from scratch, you can first perform a coarse registration of all point clouds using incremental ICP:

./script/run_icp.sh

The coarse registration can be further improved by DeepMapping. To do so, simply set INIT_POSE=/PATH/TO/ICP/RESULTS/pose_est.npy in ./script/run_train_2D.sh. Please see the comments in the script for detailed instructions.
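For intuition about what the ICP warm start computes: the core of each ICP iteration is a closed-form least-squares rigid alignment of matched point pairs (the Kabsch/SVD step). A minimal NumPy sketch of that step, as an illustration rather than the script's actual implementation:

```python
import numpy as np

def best_rigid_2d(src, dst):
    """Closed-form rigid alignment of corresponding Nx2 point sets:
    find R, t minimizing sum ||R @ src_i + t - dst_i||^2 (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# With exact correspondences, the known transform is recovered exactly.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.5, -1.0])
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
R, t = best_rigid_2d(src, src @ R_true.T + t_true)
```

Full ICP alternates this step with nearest-neighbor matching; incremental ICP chains it across consecutive scans, which is why its coarse result still benefits from DeepMapping's global refinement.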

Evaluation

The estimated sensor poses are saved as a NumPy array pose_est.npy. To evaluate the registration, execute the script:

./script/run_eval_2D.sh

The absolute trajectory error will be computed as the error metric.
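For reference, absolute trajectory error compares estimated and ground-truth trajectories point by point. A minimal RMSE-on-positions sketch over N-by-3 pose arrays [x,y,theta] (positions only, assuming the two trajectories are already expressed in a common frame; not the repo's evaluation code):

```python
import numpy as np

def ate_rmse(est, gt):
    """Root-mean-square absolute trajectory error over the (x, y)
    positions of two N x 3 pose arrays [x, y, theta]."""
    diff = est[:, :2] - gt[:, :2]
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

gt = np.zeros((4, 3))
est = gt.copy()
est[:, :2] += [3.0, 4.0]          # constant 2D offset of length 5
print(ate_rmse(est, gt))          # 5.0
```

Standard ATE evaluation typically also aligns the two trajectories with a best-fit rigid transform before computing the RMSE, since the estimated map is only defined up to a global pose.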

Related Project

DeepMapping2 (CVPR 2023) for large-scale LiDAR mapping

More Repositories

1. SSCBench - SSCBench: A Large-Scale 3D Semantic Scene Completion Benchmark for Autonomous Driving (Python, 166 stars)
2. DeepMapping2 - [CVPR2023] DeepMapping2: Self-Supervised Large-Scale LiDAR Map Optimization (Python, 156 stars)
3. DiscoNet - [NeurIPS2021] Learning Distilled Collaboration Graph for Multi-Agent Perception (136 stars)
4. peac - [ICRA2014] Fast Plane Extraction Using Agglomerative Hierarchical Clustering (AHC) (C++, 134 stars)
5. Occ4cast - Occ4cast: LiDAR-based 4D Occupancy Completion and Forecasting (Python, 112 stars)
6. V2X-Sim - [RA-L2022] V2X-Sim Dataset and Benchmark (111 stars)
7. FLAT - [ICCV2021 Oral] Fooling LiDAR by Attacking GPS Trajectory (Python, 67 stars)
8. SPARE3D - [CVPR2020] A Dataset for SPAtial REasoning on Three-View Line Drawings (Python, 52 stars)
9. MARS - [CVPR2024] Multiagent Multitraversal Multimodal Self-Driving: Open MARS Dataset (Python, 36 stars)
10. NYU-VPR - [IROS2021] NYU-VPR: Long-Term Visual Place Recognition Benchmark with View Direction and Data Anonymization Influences (Python, 31 stars)
11. EgoPAT3D - [CVPR 2022] Egocentric Action Target Prediction in 3D (Jupyter Notebook, 29 stars)
12. LLM4VPR - Can multimodal LLM help visual place recognition? (Python, 28 stars)
13. insta360_ros_driver - A ROS driver for Insta360 cameras, enabling real-time image capture, processing, and publishing in ROS environments (Python, 27 stars)
14. DeepExplorer - [RSS2023] Metric-Free Exploration for Topological Mapping by Task and Motion Imitation in Feature Space (Python, 26 stars)
15. Self-Supervised-SPARE3D - [CVPR 2022] Self-supervised Spatial Reasoning on Multi-View Line Drawings (Python, 24 stars)
16. RealCity3D (Jupyter Notebook, 22 stars)
17. DeepSoRo - [RA-L/ICRA2020] Real-time Soft Body 3D Proprioception via Deep Vision-based Sensing (Python, 22 stars)
18. FusionSense - Integrates the vision, touch, and common-sense information of foundational models, customized to the agent's perceptual needs (Python, 19 stars)
19. SNAC - [ICLR2023] Learning Simultaneous Navigation and Construction in Grid Worlds (Python, 18 stars)
20. SeeDo - Human Demo Videos to Robot Action Plans (Python, 16 stars)
21. TF-VPR - Self-supervised place recognition by exploring temporal and feature neighborhoods (Python, 15 stars)
22. DeepParticleRobot - [ICRA'22] A Deep Reinforcement Learning Environment for Particle Robot Navigation and Object Manipulation (Python, 13 stars)
23. pyAprilTag - Python wrapper of AprilTag implemented in cv2cg, used for the Robot Perception course (C++, 10 stars)
24. NYC-Indoor-VPR (Python, 9 stars)
25. BAGSFit - Primitive Fitting Using Deep Boundary Aware Geometric Segmentation (9 stars)
26. vis_nav_player - [ROB-GY 6203] Example Visual Navigation Player Code for Course Project (Python, 5 stars)
27. NYC-Event-VPR (Python, 5 stars)
28. Mobile3DPrinting - https://ai4ce.github.io/Mobile3DPrinting/ (4 stars)
29. PointCloudSimulator - Code for simulating 2D point clouds (Python, 3 stars)
30. M3DP-Sim (C++, 2 stars)
31. pyP2Mesh - Python wrapper for finding a point's closest point on a triangle mesh (Python, 2 stars)
32. VIP_SelfDrive (Makefile, 2 stars)
33. EgoPAT3Dv2 (Python, 2 stars)
34. LUWA (Jupyter Notebook, 1 star)
35. ai4ce_robot_ROS2_drivers - This repo contains all the ROS2 driver packages modified at the AI4CE lab for working with various robots (1 star)