TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo
Lukas Koestler1* · Nan Yang1,2*,† · Niclas Zeller2,3 · Daniel Cremers1,2
*equal contribution · †corresponding author
1Technical University of Munich
2Artisense
3Karlsruhe University of Applied Sciences
Conference on Robot Learning (CoRL) 2021, London, UK
arXiv | Video | OpenReview | Project Page
Code and Data
- ๐ฃ C++ code released before Christmas! Please check tandem/.
- ๐ฃ CVA-MVSNet released! Please check cva_mvsnet/.
- ๐ฃ Replica training data released! Please check replica/.
- Minor improvements throughout January. Contributions are highly welcome!
- Release of the ScanNet-trained model
- Docker image for TANDEM. Contributions are highly welcome!
Abstract
In this paper, we present TANDEM, a real-time monocular tracking and dense mapping framework. For pose estimation, TANDEM performs photometric bundle adjustment based on a sliding window of keyframes. To increase robustness, we propose a novel tracking front-end that performs dense direct image alignment using depth maps rendered from a global model that is built incrementally from dense depth predictions. To predict the dense depth maps, we propose Cascade View-Aggregation MVSNet (CVA-MVSNet), which utilizes the entire active keyframe window by hierarchically constructing 3D cost volumes with adaptive view aggregation to balance the different stereo baselines between the keyframes. Finally, the predicted depth maps are fused into a consistent global map represented as a truncated signed distance function (TSDF) voxel grid. Our experimental results show that TANDEM outperforms other state-of-the-art traditional and learning-based monocular visual odometry (VO) methods in terms of camera tracking. Moreover, TANDEM shows state-of-the-art real-time 3D reconstruction performance.
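The last step of the pipeline, fusing per-keyframe depth predictions into a TSDF voxel grid, can be illustrated with a minimal sketch. This is not TANDEM's C++/CUDA implementation; it is a simplified NumPy version of standard weighted TSDF fusion (no color, no voxel hashing), and all function and parameter names here are illustrative.

```python
import numpy as np

def fuse_depth_into_tsdf(tsdf, weights, depth, K, T_wc, voxel_size, origin, trunc):
    """Fuse one depth map into a dense TSDF voxel grid (simplified sketch).

    tsdf, weights : (X, Y, Z) arrays; tsdf initialized to 1.0, weights to 0.
    depth         : (H, W) depth map in meters (0 = invalid pixel).
    K             : (3, 3) camera intrinsics.
    T_wc          : (4, 4) camera-to-world pose.
    voxel_size    : edge length of a voxel in meters.
    origin        : world position of the grid's minimum corner.
    trunc         : truncation distance in meters.
    """
    H, W = depth.shape
    # World coordinates of all voxel centers.
    idx = np.stack(
        np.meshgrid(*[np.arange(s) for s in tsdf.shape], indexing="ij"), axis=-1
    )
    pts_w = origin + (idx + 0.5) * voxel_size
    # Transform voxel centers into the camera frame.
    T_cw = np.linalg.inv(T_wc)
    pts_c = pts_w @ T_cw[:3, :3].T + T_cw[:3, 3]
    z = pts_c[..., 2]
    # Project into the image and look up the observed depth.
    uvw = pts_c @ K.T
    u = np.round(uvw[..., 0] / np.maximum(z, 1e-9)).astype(int)
    v = np.round(uvw[..., 1] / np.maximum(z, 1e-9)).astype(int)
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.where(valid, depth[np.clip(v, 0, H - 1), np.clip(u, 0, W - 1)], 0.0)
    valid &= d > 0
    # Signed distance to the observed surface, truncated to [-1, 1];
    # voxels far behind the surface are not updated.
    sdf = (d - z) / trunc
    valid &= sdf > -1.0
    sdf = np.clip(sdf, -1.0, 1.0)
    # Weighted running average, the standard TSDF update rule.
    w_new = weights + valid
    tsdf_new = np.where(valid, (tsdf * weights + sdf) / np.maximum(w_new, 1), tsdf)
    return tsdf_new, w_new
```

Repeating this update over the keyframe stream yields the consistent global model from which the tracking front-end renders its depth maps.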