  • Stars: 235
  • Rank: 165,799 (top 4%)
  • Language: C++
  • License: Other
  • Created: about 5 years ago
  • Updated: about 1 year ago


Repository Details

ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals

Description

This program performs RGB-D SLAM in dynamic environments. We employ efficient direct tracking on the truncated signed distance function (TSDF) and leverage the color information encoded in the TSDF to estimate the pose of the sensor. The TSDF is represented efficiently using voxel hashing, with most computations parallelized on a GPU. To detect dynamics, we exploit the residuals obtained after an initial registration.
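To make the residual idea concrete, here is a minimal C++ sketch of residual-based masking of dynamics. This is not the repository's actual implementation; the function name, the per-pixel residual input, and the fixed threshold are illustrative only.

#include <cmath>
#include <cstddef>
#include <vector>

// After an initial registration of the current scan against the TSDF model,
// pixels whose signed-distance residual stays large are likely caused by
// moving objects; they are masked out before the pose is refined on the
// remaining (static) part of the scan.
void MaskDynamics(const std::vector<float>& residuals,  // |TSDF(p_i)| per pixel
                  float threshold,                      // illustrative constant
                  std::vector<bool>* dynamic_mask) {
  dynamic_mask->resize(residuals.size());
  for (std::size_t i = 0; i < residuals.size(); ++i) {
    (*dynamic_mask)[i] = std::fabs(residuals[i]) > threshold;
  }
}

In the actual method, the raw mask is further filtered before re-estimating the pose; see the paper for details.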

Check out the video:

ReFusion Video

For further details, see the paper: "ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals".

WARNING: The provided code is neither optimized nor in an easy-to-read shape. It is provided "as is", as a prototype implementation of our paper. Use it at your own risk. Moreover, compared to the paper, this implementation lacks the features that make it able to deal with invalid measurements. Therefore, it will not produce good models for the TUM RGB-D Benchmark scenes. To test it, please use our Bonn RGB-D Dynamic Dataset.

Key contributors

Emanuele Palazzolo ([email protected])

Related publications

If you use this code for your research, please cite:

Emanuele Palazzolo, Jens Behley, Philipp Lottes, Philippe Giguère, Cyrill Stachniss, "ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals", arXiv, 2019. PDF

BibTeX:

@article{palazzolo2019arxiv,
  author = {E. Palazzolo and J. Behley and P. Lottes and P. Gigu\`ere and C. Stachniss},
  title = {{ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals}},
  journal = {arXiv},
  year = {2019},
  url = {https://arxiv.org/abs/1905.02082}
}

Dependencies

  • catkin
  • Eigen >= 3.3
  • OpenCV >= 2.4
  • CUDA >= 9.0
  • (optional) Doxygen >= 1.8.11

Installation guide

Ubuntu 16.04

First, install the necessary dependencies:

  • Install CUDA.
  • Install the rest of the dependencies:
sudo apt install git libeigen3-dev libopencv-dev catkin
sudo apt install python-pip
sudo pip install catkin-tools
  • Finally, if you also want to build the documentation, you need Doxygen installed (tested only with Doxygen 1.8.11):
sudo apt install doxygen

If you do not have a catkin workspace already, create one:

cd
mkdir catkin_ws
cd catkin_ws
mkdir src
catkin init
cd src
git clone https://github.com/ros/catkin.git

Clone the repository in your catkin workspace:

cd ~/catkin_ws/src
git clone https://github.com/PRBonn/refusion.git

Then, build the project:

catkin build refusion

Now the project root directory (e.g. ~/catkin_ws/src/refusion) should contain a bin directory with an example binary and, if Doxygen is installed, a docs directory with the documentation.

Ubuntu 18.04

The software is not compatible with the version of Eigen shipped in Ubuntu 18.04. It is necessary to install a newer version and modify the CMakeLists.txt file to use it:

  • Get Eigen v3.3.7:
wget http://bitbucket.org/eigen/eigen/get/3.3.7.tar.bz2
  • Extract the archive and install the Eigen libraries in /usr/local:
tar xjf 3.3.7.tar.bz2
cd eigen-* && mkdir build && cd build && cmake .. && sudo make install
  • Change line 9 of CMakeLists.txt from
find_package(Eigen3 REQUIRED)

to

find_package(Eigen3 REQUIRED PATHS /usr/local/include/)
  • Follow the installation guide for Ubuntu 16.04.

How to use it

The Tracker class is the core of the program. Its constructor requires the options for the TSDF representation, the options for the tracker, and the intrinsic parameters of the RGB-D sensor. Use the AddScan member function to compute the pose of a scan and add it to the map. To visualize the result, the GetCurrentPose member function returns the current pose of the sensor, and the GenerateRgb member function creates a virtual RGB image from the model. Furthermore, the ExtractMesh member function creates a mesh from the current model and saves it as an OBJ file.
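A minimal usage sketch of this API follows. The member functions are the ones named above, but the header path, namespaces, option structs, and argument types are assumptions; check src/example/example.cpp and the headers for the real ones.

#include <Eigen/Core>
#include <opencv2/opencv.hpp>
#include "tracker/tracker.h"  // assumed header path

int main() {
  // Options for the TSDF volume, the tracker, and the sensor intrinsics.
  // Struct and namespace names are assumptions for this sketch.
  refusion::tsdfvh::TsdfVolumeOptions tsdf_options;
  refusion::TrackerOptions tracker_options;
  refusion::RgbdSensor sensor;  // fx, fy, cx, cy, image width/height

  refusion::Tracker tracker(tsdf_options, tracker_options, sensor);

  // For every RGB-D frame: estimate the sensor pose and fuse the scan.
  cv::Mat rgb = cv::imread("rgb.png");
  cv::Mat depth = cv::imread("depth.png", cv::IMREAD_ANYDEPTH);
  tracker.AddScan(rgb, depth);

  // Query the current pose and render a virtual RGB view of the model.
  Eigen::Matrix4d pose = tracker.GetCurrentPose();
  cv::Mat virtual_rgb = tracker.GenerateRgb(640, 480);

  // Export the reconstructed model as an OBJ mesh
  // (bounding-corner arguments are assumed here).
  auto mesh = tracker.ExtractMesh(Eigen::Vector3f(-5, -5, -5),
                                  Eigen::Vector3f(5, 5, 5));
  mesh.SaveToFile("model.obj");
  return 0;
}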

Refer to the documentation and to the source code for further details. An example that illustrates how to use the library is located in src/example/example.cpp.

Examples / datafiles

After the build process, the bin directory in the project root directory (e.g. ~/catkin_ws/src/refusion) will contain an example binary. To run it, execute from the command line:

cd ~/catkin_ws/src/refusion/bin
./refusion_example DATASET_PATH

where DATASET_PATH is the path to the directory of a dataset in the format of the TUM RGB-D Benchmark (e.g. ~/rgbd_bonn_dataset/rgbd_bonn_crowd3). Some example datasets can be found here.

Note that the directory of the dataset should contain a file called associated.txt, containing the associations between RGB and depth images. This file can be created using this Python tool:

python2 associate.py depth.txt rgb.txt > associated.txt
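Each line of associated.txt then pairs a depth frame and an RGB frame by timestamp; with the command above the expected column order is depth timestamp, depth image path, RGB timestamp, RGB image path (an assumption; check your generated file). A small C++ sketch of reading such a file:

#include <fstream>
#include <iostream>
#include <string>

int main() {
  std::ifstream file("associated.txt");
  double depth_ts, rgb_ts;
  std::string depth_path, rgb_path;
  // One association per line: <depth_ts> <depth_path> <rgb_ts> <rgb_path>
  while (file >> depth_ts >> depth_path >> rgb_ts >> rgb_path) {
    // Load the image pair here and feed it to the tracker.
    std::cout << depth_path << " <-> " << rgb_path << "\n";
  }
  return 0;
}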

License

This project is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. See the LICENSE.txt file for details.

Acknowledgments

This work has partly been supported by the DFG under the grant number FOR 1505: Mapping on Demand, under the grant number BE 5996/1-1, and under Germany's Excellence Strategy EXC 2070-390732324: PhenoRob.

More Repositories

1. kiss-icp - A LiDAR odometry pipeline that just works (Python, 1,380 stars)
2. depth_clustering - 🚕 Fast and robust clustering of point clouds generated with a Velodyne sensor. (C++, 1,105 stars)
3. lidar-bonnetal - Semantic and Instance Segmentation of LiDAR point clouds for autonomous driving (Python, 883 stars)
4. semantic_suma - SuMa++: Efficient LiDAR-based Semantic SLAM (Chen et al., IROS 2019) (C++, 840 stars)
5. semantic-kitti-api - SemanticKITTI API for visualizing dataset, processing data, and evaluating results. (Python, 723 stars)
6. OverlapNet - OverlapNet - Loop Closing for 3D LiDAR-based SLAM (chen2020rss) (Python, 617 stars)
7. LiDAR-MOS - (LMNet) Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach Exploiting Sequential Data (RAL/IROS 2021) (Python, 529 stars)
8. vdbfusion - C++/Python Sparse Volumetric TSDF Fusion (C++, 428 stars)
9. SHINE_mapping - 🌟 SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit Neural Representations (ICRA '23) (Python, 424 stars)
10. puma - Poisson Surface Reconstruction for LiDAR Odometry and Mapping (Python, 395 stars)
11. bonnet - Bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics. (Python, 320 stars)
12. range-mcl - Range Image-based LiDAR Localization for Autonomous Vehicles Using Mesh Maps (chen2021icra) (Python, 263 stars)
13. overlap_localization - chen2020iros: Learning an Overlap-based Observation Model for 3D LiDAR Localization. (Python, 259 stars)
14. rangenet_lib - Inference module for RangeNet++ (milioto2019iros, chen2019iros) (C++, 238 stars)
15. bonnetal - Bonnet and then some! Deep Learning Framework for various Image Recognition Tasks. Photogrammetry and Robotics Lab, University of Bonn (Python, 226 stars)
16. PIN_SLAM - 📍 PIN-SLAM: LiDAR SLAM Using a Point-Based Implicit Neural Representation for Achieving Global Map Consistency (Python, 202 stars)
17. 4DMOS - Receding Moving Object Segmentation in 3D LiDAR Data Using Sparse 4D Convolutions (RAL 2022) (Python, 201 stars)
18. visual-crop-row-navigation - A visual-servoing based robot navigation framework tailored for navigating in row-crop fields. It uses the images from two on-board cameras and exploits the regular crop-row structure present in the fields for navigation, without performing explicit localization or mapping. It allows the robot to follow the crop-rows accurately and handles the switch to the next row seamlessly within the same framework. (C++, 174 stars)
19. pole-localization - Online Range Image-based Pole Extractor for Long-term LiDAR Localization in Urban Environments (Python, 158 stars)
20. online_place_recognition - Graph-based image sequence matching for visual place recognition in changing environments. (C++, 148 stars)
21. LiDiff (Python, 146 stars)
22. agribot - The mission of the project is to build an agricultural robot (AgriBot) from scratch, with the aim of serving as a data-recording platform in fields. For further information about the design and purpose of the robot, please follow the About the AgriBot Project page. (C++, 134 stars)
23. MapClosures - Effectively Detecting Loop Closures using Point Cloud Density Maps (Python, 128 stars)
24. point-cloud-prediction - Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks (Python, 125 stars)
25. make_it_dense - Make it Dense: Self-Supervised Geometric Scan Completion of Sparse 3D LiDAR Scans in Large Outdoor Environments (Python, 124 stars)
26. ir-mcl - IR-MCL: Implicit Representation-Based Online Global Localization https://arxiv.org/abs/2210.03113 (Python, 118 stars)
27. MutiverseOdometry - Code for Simple But Effective Redundant Odometry for Autonomous Vehicles (C++, 110 stars)
28. vpr_relocalization - A framework for visual place recognition in changing environments. Matches two sequences of images with arbitrary trajectory overlap. (C++, 102 stars)
29. TARL - TARL: Temporal Consistent 3D LiDAR Representation Learning for Semantic Perception in Autonomous Driving (Python, 97 stars)
30. lidar-visualizer - A LiDAR visualization tool for all your datasets (Python, 96 stars)
31. LocNDF - LocNDF: Neural Distance Field Mapping for Robot Localization (Python, 96 stars)
32. deep-point-map-compression (Python, 91 stars)
33. segcontrast (Python, 87 stars)
34. auto-mos - Automatic Labeling to Generate Training Data for Online LiDAR-based Moving Object Segmentation (Python, 85 stars)
35. 3DUIS (Python, 77 stars)
36. lidar_transfer - Code for Langer et al., "Domain Transfer for Semantic Segmentation of LiDAR Data using Deep Neural Networks", IROS, 2020. (Python, 70 stars)
37. hsmcl (C++, 60 stars)
38. SIMP (Python, 59 stars)
39. descriptor-dr - [ICRA 2023] Learning-Based Dimensionality Reduction for Computing Compact and Effective Local Feature Descriptors (Python, 58 stars)
40. extrinsic_calibration - Motion Based Multi-Sensor Extrinsic Calibration (Python, 57 stars)
41. vdbfusion_ros - ROS1 Wrapper for VDBFusion https://github.com/PRBonn/vdbfusion (C++, 55 stars)
42. DCPCR - DCPCR: Deep Compressed Point Cloud Registration in Large-Scale Outdoor Environments (Python, 54 stars)
43. HortiMapping - 🫑 Panoptic Mapping with Fruit Completion and Pose Estimation for Horticultural Robots (IROS '23) (Python, 48 stars)
44. fast_change_detection - Fast Image-Based Geometric Change Detection Given a 3D Model (C++, 44 stars)
45. contrastive_association - Contrastive Instance Association for 4D Panoptic Segmentation using Sequences of 3D LiDAR Scans (Python, 43 stars)
46. retriever - Point Cloud-based Place Recognition in Compressed Map (Python, 38 stars)
47. 4d_plant_registration (Python, 38 stars)
48. tmcl - Text Guided MCL (C++, 34 stars)
49. MaskPLS - Mask-Based Panoptic LiDAR Segmentation for Autonomous Driving, RA-L, 2023 (Python, 32 stars)
50. dynamic-point-removal - Static Map Generation from 3D LiDAR Point Clouds Exploiting Ground Segmentation (Python, 31 stars)
51. manifold_python - Python bindings for https://github.com/hjwdzh/Manifold (C++, 29 stars)
52. PS-res-excite (Python, 26 stars)
53. kppr - KPPR: Exploiting Momentum Contrast for Point Cloud-Based Place Recognition (Python, 26 stars)
54. geometrical_stem_detection - Code for fast and accurate geometrical plant stem detection (C++, 24 stars)
55. goPro-meta - App to sample images from GoPro Hero 5 video and synchronize sensor frames to them. Output is a YAML file and extracted images. (C++, 23 stars)
56. pybonirob - Set of tools to access bonirob datasets in Python (Python, 23 stars)
57. phenobench-baselines - Baselines of the PhenoBench Dataset (Python, 18 stars)
58. voxblox_pybind - Python bindings for the Voxblox library (C++, 18 stars)
59. PartiallyObservedInverseGames.jl - An inverse game solver for inferring objectives from noise-corrupted partial state observations of non-cooperative multi-agent interactions. (Julia, 18 stars)
60. catkin_tools_fetch - 🐕 "fetch" and "update" dependencies of projects in your catkin workspace with a new verb "dependencies" for catkin_tools (Python, 16 stars)
61. nuscenes2kitti (Python, 16 stars)
62. plants_temporal_matcher - This system can perform 3D point-to-point associations between plants' point clouds acquired in different sessions, even in the presence of highly repetitive structures and drastic changes. (Python, 12 stars)
63. StyleGenForLabels - StyleGAN-based generation of labels for crop-weed segmentation (Python, 11 stars)
64. ipb_homework_checker - ✔️ A generic homework checker that we use to automatically check students' homework (Python, 11 stars)
65. leaf_mesher - Precise 3D Reconstruction of Plants from UAV Imagery Combining Bundle Adjustment and Template Matching (9 stars)
66. HAPT (Python, 9 stars)
67. sigf - Image Matching for Crop Fields Using Similarity Invariant Geometric Feature (MATLAB, 8 stars)
68. DG-CWS - Towards Domain Generalization in Crop and Weed Segmentation for Precision Farming Robots (Python, 7 stars)
69. agri-pretraining (Python, 6 stars)
70. leaf-plant-instance-segmentation - In-Field Phenotyping Based on Crop Leaf and Plant Instance Segmentation (Python, 5 stars)
71. MinkowskiPanoptic - Panoptic segmentation baseline implemented based on the MinkowskiEngine library (Python, 5 stars)
72. vdb_to_numpy - Tool to convert VDB grids to numpy arrays. (Jupyter Notebook, 4 stars)
73. g2o_catkin - :octocat: G2O meets catkin (CMake, 3 stars)
74. ipb_workspace - An empty default workspace for development inside IPB lab (3 stars)
75. plant_pcd_segmenter - High Precision Leaf Instance Segmentation for Phenotyping in Point Clouds Obtained Under Real Field Conditions (2 stars)
76. Unsupervised-Pre-Training-for-3D-Leaf-Instance-Segmentation - Official repository of Unsupervised Pre-Training for 3D Leaf Instance Segmentation by Roggiolani et al. (Python, 1 star)