Online Photometric Calibration

Recent direct visual odometry and SLAM algorithms have demonstrated impressive levels of precision. However, they require a photometric camera calibration in order to achieve competitive results. Hence, the respective algorithms cannot be directly applied to an off-the-shelf camera or to a video sequence acquired with an unknown camera. In this work we propose a method for online photometric calibration which enables processing auto-exposure videos with visual odometry precisions that are on par with those of photometrically calibrated videos. Our algorithm recovers the exposure times of consecutive frames, the camera response function, and the attenuation factors of the sensor irradiance due to vignetting. Gain-robust KLT feature tracks are used to obtain scene point correspondences as input to a nonlinear optimization framework. We show that our approach can reliably calibrate arbitrary video sequences by evaluating it on datasets for which full photometric ground truth is available. We further show that our calibration can improve the performance of a state-of-the-art direct visual odometry method that works solely on pixel intensities, calibrating for the photometric parameters in an online fashion and in real time. For more details please refer to our paper.

If you use this code in your research, we would appreciate it if you cited the respective publication:

Online Photometric Calibration of Auto Exposure Video for Realtime Visual Odometry and SLAM (P. Bergmann, R. Wang, D. Cremers), In IEEE Robotics and Automation Letters (RA-L), volume 3, 2018. [pdf] [video]

For more information on photometric calibration, see https://vision.in.tum.de/research/vslam/photometric-calibration.

Note: This is a preliminary release. You should consider this research code in beta. All interfaces are subject to change.

Install

We support Ubuntu 14.04 and 16.04 as well as macOS, but the code might also work on a variety of other platforms that meet the dependency requirements.

Dependencies

The main dependencies are cmake 3.2 or later, a C++11 compiler, and OpenCV 2.4.

Ubuntu 14.04

On Ubuntu 14.04 you need to get a more recent version of cmake.

sudo add-apt-repository ppa:george-edison55/cmake-3.x

Then continue installing the dependencies as described for Ubuntu 16.04; the apt-get update step there will pick up the newer cmake from the PPA.

Ubuntu 16.04

Required:

sudo apt-get update
sudo apt-get install \
    build-essential \
    g++ \
    cmake \
    libopencv-dev

Optional:

CCache can help you speed up repeated builds.

Note: You need at least cmake version 3.4 for ccache to work automatically.

sudo apt-get install ccache
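
If your cmake is older than 3.4, a generic workaround (standard cmake/ccache usage, not something this project documents) is to route the compilers through ccache via environment variables when configuring:

CC="ccache gcc" CXX="ccache g++" cmake ..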

macOS

We assume you have installed Homebrew.

Required:

brew install cmake opencv@2
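
Note that versioned formulae such as opencv@2 are keg-only in Homebrew, so cmake may not find OpenCV on its own. A common workaround (an assumption about a default Homebrew setup; adjust the path if yours differs) is to point cmake at the keg prefix when configuring:

cmake -DCMAKE_PREFIX_PATH=$(brew --prefix opencv@2) ..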

Optional:

brew install ccache

Compile

Start in the package directory.

mkdir build
cd build
cmake ..
make -j4
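
The invocation above uses cmake's default configuration. If you want an optimized build, a common optional variant (generic cmake usage, not something this project prescribes) is to set the build type explicitly and scale the job count to your machine; $(nproc) is Linux-specific, on macOS use sysctl -n hw.ncpu instead:

cmake -DCMAKE_BUILD_TYPE=Release ..
make -j$(nproc)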

Optionally you can install the built libraries and executables.

sudo make install

Usage

Online calibration

Example usage:

Download a sequence of the TUMmono VO dataset (here sequence 30).

SEQ=30
wget http://vision.in.tum.de/mono/dataset/sequence_$SEQ.zip
unzip sequence_$SEQ.zip
unzip -d sequence_$SEQ/images sequence_$SEQ/images.zip

Run online calibration.

build/bin/online_pcalib_demo -i sequence_$SEQ/images --exposure-gt-file sequence_$SEQ/times.txt
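
The --exposure-gt-file option only supplies ground-truth exposure times for visualization (see the command line options below), so a minimal run should need nothing but the image folder:

build/bin/online_pcalib_demo -i sequence_$SEQ/images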

Note: Currently the implementation is not suitable for fisheye lenses with black borders around the image; this affects some of the TUMmono VO dataset sequences as well as the TUM VI dataset sequences.

Batch calibration

In online calibration mode, tracking and backend optimization run multithreaded in parallel on the CPU. If tracking and backend optimization should instead be performed sequentially and real-time performance is not required, the system can be run in batch calibration mode. To run in batch mode, simply add the command line option

  --calibration-mode batch

For batch calibration you might want to use the exposure times estimated by the optimization backend rather than the rapidly estimated exposure times from the frontend. In order to provide more keyframes to the backend optimizer, the run_settings parameters have to be adjusted.

These parameters can be changed by setting the following options manually (a combined example follows the list):

  --nr-active-frames INT      Maximum number of frames to be stored in the database.
  --keyframe-spacing INT      Number of frames that keyframes are apart in the backend optimizer.
  --min-keyframes-valid INT   Minimum number of frames a feature has to be tracked to be considered for optimization.
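
For instance, a batch run with denser keyframe spacing could look like the following. The option names are taken from the help output below; the specific values are purely illustrative, not tuned recommendations:

build/bin/online_pcalib_demo \
    -i sequence_$SEQ/images \
    --calibration-mode batch \
    --nr-active-frames 400 \
    --keyframe-spacing 5 \
    --min-keyframes-valid 2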

Command line options

Photometric Calibration
Usage: online_pcalib_demo [OPTIONS]

Options:
  -h,--help                   Print this help message and exit
  -i,--image-folder TEXT=images
                              Folder with image files to read.
  --start-image-index INT=0   Start reading from this image index.
  --end-image-index INT=-1    Stop reading at this image index.
  --image-width INT=640       Resize image to this width.
  --image-height INT=480      Resize image to this height.
  --exposure-gt-file TEXT=times.txt
                              Textfile containing ground truth exposure times for each frame for visualization.
  --calibration-mode TEXT=online
                              Choose 'online' or 'batch'
  --nr-active-frames INT=200  Maximum number of frames to be stored in the database.
  --keyframe-spacing INT=15   Number of frames that keyframes are apart in the backend optimizer.
  --min-keyframes-valid INT=3 Minimum number of frames a feature has to be tracked to be considered for optimization.
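
As a further illustration of combining these options (the values here are arbitrary, chosen only for the example), the following would process frames 100 through 500 of a sequence, resized to 640x480:

build/bin/online_pcalib_demo \
    -i sequence_$SEQ/images \
    --start-image-index 100 \
    --end-image-index 500 \
    --image-width 640 \
    --image-height 480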

License

This project was originally developed at the TUM computer vision group in 2017 by Paul Bergmann.

It is currently maintained by Paul Bergmann, Rui Wang, and Nikolaus Demmel. See AUTHORS.txt for a list of contributors.

This project is available under a BSD 3-Clause license. See LICENSE.txt.

