  • Stars: 148
  • Rank: 249,983 (Top 5%)
  • Language: C++
  • Created: almost 12 years ago
  • Updated: over 11 years ago


Repository Details

Dense Visual Odometry (dvo)

These packages provide an implementation of rigid body motion estimation for an RGB-D camera from consecutive images.
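
As a rough sketch of the formulation described in the publications listed below (the notation here is illustrative and not taken from the code): the rigid body motion xi between two consecutive frames is estimated by minimizing a robustly weighted photometric error over all pixels,

    E(\xi) = \sum_i w(r_i) \, r_i(\xi)^2
    r_i(\xi) = I_2\big( \pi( T(\xi) \, \pi^{-1}(x_i, Z_1(x_i)) ) \big) - I_1(x_i)

where I_1 and I_2 are the consecutive intensity images, Z_1(x_i) is the depth of pixel x_i in the first frame, \pi is the pinhole projection, T(\xi) the rigid body transform, and w a robust weighting function (cf. the use_weighting option below).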

Installation

Check out the branch matching your ROS version into a folder on your ROS_PACKAGE_PATH and build the packages with rosmake (a minimal shell sketch follows the list below).

  • ROS Fuerte:

    git clone -b fuerte git://github.com/tum-vision/dvo.git
    rosmake dvo_core dvo_ros dvo_benchmark
  • ROS Electric:

    You need to install perception_pcl_unstable with PCL version 1.5+.

    git clone -b electric git://github.com/tum-vision/dvo.git
    rosmake dvo_core dvo_ros dvo_benchmark
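
A minimal shell sketch of the Fuerte setup, assuming the repository is cloned into a workspace folder ~/ros_workspace (the directory name is illustrative, not prescribed by the package):

    # Illustrative only: clone into a folder and make it visible to rosmake.
    mkdir -p ~/ros_workspace && cd ~/ros_workspace
    git clone -b fuerte git://github.com/tum-vision/dvo.git
    export ROS_PACKAGE_PATH=~/ros_workspace/dvo:$ROS_PACKAGE_PATH
    rosmake dvo_core dvo_ros dvo_benchmark

For Electric, the same steps apply with the electric branch, after installing perception_pcl_unstable as noted above.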

Usage

Estimating the camera trajectory from an RGB-D image stream:

  • Start the OpenNI camera driver: roslaunch openni_launch openni.launch
  • Start the dvo camera_tracker node: rosrun dvo_ros camera_tracker
  • Start the dynamic_reconfigure GUI
    • In /camera/driver, enable depth_registration
    • In /camera_tracker, enable reconstruction, use_weighting, run_dense_tracking, and use_dense_tracking_estimate

If everything works, the stdout of the camera_tracker node should show "[ WARN] [1355131430.132265592]: RGB image size has changed, resetting tracker!" and the camera pose will be published on the /rgbd/pose topic. You can restart the camera motion estimation by disabling and re-enabling the run_dense_tracking option.
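
To check that poses are actually arriving, the standard ROS topic tools can be used (nothing below is specific to dvo):

    # Generic ROS introspection; /rgbd/pose is the topic named above.
    rostopic list | grep pose
    rostopic echo /rgbd/pose   # print incoming pose messages
    rostopic hz /rgbd/pose     # measure the publishing rate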

For visualization:

  • Start RVIZ (see the command below)
  • Set the Target Frame to /world
  • Add an Interactive Marker display and set its Update Topic to /dvo_vis/update
  • Add a PointCloud2 display and set its Topic to /dvo_vis/cloud

The red camera shows the current camera position. The blue camera displays the initial camera position.
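
For reference, RVIZ is started from a terminal with the standard command; the display configuration described above is then done in the GUI:

    rosrun rviz rviz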

Publications

The following publications describe the approach:

  • Robust Odometry Estimation for RGB-D Cameras (C. Kerl, J. Sturm, D. Cremers), In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), 2013.
  • Real-Time Visual Odometry from Dense RGB-D Images (F. Steinbruecker, J. Sturm, D. Cremers), In Workshop on Live Dense Reconstruction with Moving Cameras at the Intl. Conf. on Computer Vision (ICCV), 2011.

License

The packages dvo_core, dvo_ros, and dvo_benchmark are licensed under the GNU General Public License Version 3 (GPLv3), see http://www.gnu.org/licenses/gpl.html.

The package sophus is licensed under the MIT License, see http://opensource.org/licenses/MIT.

More Repositories

   1. lsd_slam: LSD-SLAM (C++, 2,486 stars)
   2. tandem: [CoRL 21'] TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo (C++, 911 stars)
   3. LDSO: DSO with SIM(3) pose graph optimization and loop closure (C++, 653 stars)
   4. dvo_slam: Dense Visual Odometry and SLAM (C++, 607 stars)
   5. fastfusion: Volumetric 3D Mapping in Real-Time on a CPU (C++, 543 stars)
   6. online_photometric_calibration: Implementation of online photometric calibration (https://vision.in.tum.de/research/vslam/photometric-calibration) (C++, 306 stars)
   7. mono_dataset_code: Code for Monocular Visual Odometry Dataset - https://vision.cs.tum.edu/data/datasets/mono-dataset (C++, 261 stars)
   8. tum_ardrone: Repository for the tum_ardrone ROS package, implementing autonomous flight with PTAM-based visual navigation for the Parrot AR.Drone (C++, 221 stars)
   9. fusenet: Official code release for the paper "FuseNet: Incorporating Depth into Semantic Segmentation via Fusion-based CNN Architecture", published at the 13th Asian Conference on Computer Vision (ACCV 2016) (C++, 126 stars)
  10. pnec: [CVPR 2022] The Probabilistic Normal Epipolar Constraint for Frame-To-Frame Rotation Optimization under Uncertain Feature Positions (C++, 117 stars)
  11. captcha_recognition (Python, 71 stars)
  12. intrinsic-neural-fields: [ECCV '22] Intrinsic Neural Fields: Learning Functions on Manifolds (Jupyter Notebook, 66 stars)
  13. dbatk: Distributed Bundle Adjustment Toolkit (59 stars)
  14. fastms: Real-Time Minimization of the Piecewise Smooth Mumford-Shah Functional (C++, 57 stars)
  15. ardrone_autonomy: A slightly modified version of the official ardrone_autonomy package (https://github.com/AutonomyLab/ardrone_autonomy) (C, 53 stars)
  16. learn_prox_ops: Implementation of "Learning Proximal Operators: Using Denoising Networks for Regularizing Inverse Imaging Problems" (Python, 43 stars)
  17. tum_simulator (C++, 40 stars)
  18. prost: A fast and flexible convex optimization framework based on proximal splitting (C++, 35 stars)
  19. afs: Automatic Feature Selection (C++, 31 stars)
  20. rgbd_scribble_benchmark: RGB-D Scribble-based Segmentation Benchmark (Python, 26 stars)
  21. autonavx_ardrone: Code for AR.Drone Exercises (C++, 24 stars)
  22. autonavx_web: Interactive exercises for the AUTONAVx course (JavaScript, 24 stars)
  23. sublabel_relax: Code for sublabel-accurate multi-labeling papers (published at CVPR '16 and ECCV '16) (C++, 20 stars)
  24. csd_lmnn: Combined Spectral Descriptors and LMNN for non-rigid 3D shape retrieval (MATLAB, 19 stars)
  25. rgbd_demo: Simple ROS demo for processing RGB-D data (C++, 17 stars)
  26. mem: Masked Event Modeling: Self-Supervised Pretraining for Event Cameras (WACV '24) (Python, 15 stars)
  27. kfusion_ros: ROS integration for kfusion (C++, 11 stars)
  28. openni2_camera: OpenNI2 camera node for ROS (C++, 9 stars)
  29. articulation: Articulation models (C++, 6 stars)
  30. nnascg: Source code for experiments in the paper "Deriving Neural Network Design and Learning from the Probabilistic Framework of Chain Graphs" by Yuesong Shen and Daniel Cremers (Python, 4 stars)
  31. lgm: Implementation of Layered Graphical Model with demo code (Python, 4 stars)
  32. dca: Source code for the NeurIPS 2022 paper "Deep Combinatorial Aggregation" (Python, 4 stars)
  33. flbo (2 stars)
  34. hierahyp (1 star)