• Stars: 233
• Rank: 172,230 (Top 4%)
• Language: Python
• License: GNU General Public License v3.0
• Created: about 6 years ago
• Updated: over 5 years ago


Repository Details

DeepTAM: Deep Tracking and Mapping https://lmb.informatik.uni-freiburg.de/people/zhouh/deeptam/

DeepTAM

DeepTAM is a learnt system for keyframe-based dense camera tracking and mapping.

If you use this code for research, please cite the following paper:

@InProceedings{ZUB18,
    author       = "H. Zhou and B. Ummenhofer and T. Brox",
    title        = "DeepTAM: Deep Tracking and Mapping",
    booktitle    = "European Conference on Computer Vision (ECCV)",
    month        = " ",
    year         = "2018",
    url          = "http://lmb.informatik.uni-freiburg.de/Publications/2018/ZUB18"
}

See the project page for the paper and other material.

Note: Currently we only provide deployment code.

Setup

The current version has been tested on Ubuntu 16.04 with Python 3.

# install virtualenv manager (here we use pew)
pip3 install pew

# create virtualenv
pew new deeptam

# switch to virtualenv
pew in deeptam
# install tensorflow 1.4.0 with gpu
pip3 install tensorflow-gpu==1.4.0

# install some python modules
pip3 install minieigen
pip3 install scikit-image
# clone and build lmbspecialops (use branch deeptam)
git clone -b deeptam https://github.com/lmb-freiburg/lmbspecialops.git
LMBSPECIALOPS_DIR=$PWD/lmbspecialops
cd $LMBSPECIALOPS_DIR
mkdir build
cd build
cmake ..
make

# add lmbspecialops to your PYTHONPATH
pew add $LMBSPECIALOPS_DIR/python
# clone the deeptam repository
git clone https://github.com/lmb-freiburg/deeptam.git
DEEPTAM_DIR=$PWD/deeptam

# add deeptam_tracker to your PYTHONPATH
pew add $DEEPTAM_DIR/tracking/python

# add deeptam_mapper to your PYTHONPATH
pew add $DEEPTAM_DIR/mapping/python
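
Before running the examples, it can be worth checking that everything is importable from the new virtualenv. The following is a minimal sanity-check sketch (not part of the repository); it only assumes the module names set up above (tensorflow, lmbspecialops, deeptam_tracker, deeptam_mapper):

# check_setup.py -- minimal sanity check, run inside the "deeptam" virtualenv
import tensorflow as tf

print("TensorFlow version:", tf.__version__)         # expected: 1.4.0
print("GPU available:", tf.test.is_gpu_available())  # should print True for tensorflow-gpu

# these imports only succeed if the paths added with `pew add` are active
import lmbspecialops
import deeptam_tracker
import deeptam_mapper

print("lmbspecialops, deeptam_tracker and deeptam_mapper imported successfully")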

Running tracking examples

# download example data
cd $DEEPTAM_DIR/tracking/data
./download_testdata.sh

# download weights
cd $DEEPTAM_DIR/tracking/weights
./download_weights.sh

The basic example shows how to use DeepTAM to track the camera within one keyframe:

# run a basic example
cd $DEEPTAM_DIR/tracking/examples
python3 example_basic.py
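
Conceptually, the basic example estimates the pose of the current frame relative to a single keyframe; the absolute camera pose is then obtained by composing that increment with the keyframe pose. The sketch below is only a numpy illustration of this composition under example values, not code from the repository:

# conceptual sketch: compose the predicted keyframe-to-current increment with
# the keyframe pose to obtain the absolute camera pose
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 rigid-body transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

T_world_keyframe = pose_matrix(np.eye(3), np.zeros(3))                    # keyframe pose in the world frame
T_keyframe_current = pose_matrix(np.eye(3), np.array([0.05, 0.0, 0.01]))  # predicted increment (example values)
T_world_current = T_world_keyframe.dot(T_keyframe_current)                # absolute pose of the current frame
print(T_world_current)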

The advanced example shows how to track a video sequence with multiple keyframes:

# run an advanced example
cd $DEEPTAM_DIR/tracking/examples
python3 example_advanced_sequence.py

# or run without visualization for speedup
python3 example_advanced_sequence.py --disable_vis
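
Tracking a whole sequence additionally requires deciding when to switch to a new keyframe, typically once the camera has moved too far from the active one. The exact criterion and thresholds used by example_advanced_sequence.py are not documented here, so the following is only an assumed illustration of such a rule:

# conceptual sketch: spawn a new keyframe when the camera has moved too far
# from the active one (thresholds are assumptions, not the repository's values)
import numpy as np

def needs_new_keyframe(T_world_keyframe, T_world_current,
                       max_translation=0.15, max_angle_deg=10.0):
    T_rel = np.linalg.inv(T_world_keyframe).dot(T_world_current)
    translation = np.linalg.norm(T_rel[:3, 3])
    # rotation angle recovered from the trace of the relative rotation matrix
    cos_angle = (np.trace(T_rel[:3, :3]) - 1.0) / 2.0
    angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return translation > max_translation or angle_deg > max_angle_deg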

Running mapping examples

# download weights
cd $DEEPTAM_DIR/mapping/weights
./download_weights.sh

# run the example
cd $DEEPTAM_DIR/mapping/examples
python3 mapping_test_deeptam.py

License

DeepTAM is released under the GNU General Public License v3.0.

More Repositories

1. flownet2 - FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks (C++, 1,004 stars)
2. hand3d - Network estimating 3D Handpose from single color images (Python, 801 stars)
3. demon - DeMoN: Depth and Motion Network (Python, 574 stars)
4. freihand - A dataset for estimation of hand pose and shape from single color images. (Python, 382 stars)
5. mv3d - Multi-view 3D Models from Single Images with a Convolutional Network (Python, 214 stars)
6. rgbd-pose3d - 3D Human Pose Estimation in RGBD Images for Robotic Task Learning (Python, 198 stars)
7. flownet2-docker - Dockerfile and runscripts for FlowNet 2.0 (estimation of optical flow) (Shell, 158 stars)
8. netdef_models - Repository for different network models related to flow/disparity (ECCV 18) (Python, 157 stars)
9. ogn - Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs (C++, 155 stars)
10. orion - ORION: Orientation-boosted Voxel Nets for 3D Object Recognition (MATLAB, 111 stars)
11. what3d - What Do Single-view 3D Reconstruction Networks Learn? (Python, 98 stars)
12. dispnet-flownet-docker - Dockerfile and runscripts for DispNet and FlowNet1 (estimation of disparity and optical flow) (Shell, 87 stars)
13. Unet-Segmentation - The U-Net Segmentation plugin for Fiji (ImageJ) (Java, 87 stars)
14. robustmvd - Repository for the Robust Multi-View Depth Benchmark (Python, 74 stars)
15. contra-hand - Code in conjunction with the publication 'Contrastive Representation Learning for Hand Shape Estimation' (Python, 53 stars)
16. Multimodal-Future-Prediction - The official repository for the CVPR 2019 paper "Overcoming Limitations of Mixture Density Networks: A Sampling and Fitting Framework for Multimodal Future Prediction" (Python, 47 stars)
17. lmbspecialops - A collection of tensorflow ops (C++, 46 stars)
18. FLN-EPN-RPN - Source code of the CVPR 2020 paper "Multimodal Future Localization and Emergence Prediction for Objects in Egocentric View with a Reachability Prior" (Python, 32 stars)
19. flow_rl (Python, 28 stars)
20. netdef-docker - DispNet3, FlowNet3, FlowNetH, SceneFlowNet -- in Docker (Shell, 28 stars)
21. caffe-unet-docker - The U-Net Segmentation server (caffe_unet) for Docker (Shell, 27 stars)
22. Contrastive-Future-Trajectory-Prediction - The official repository of the ICCV paper "On Exposing the Challenging Long Tail in Future Prediction of Traffic Actors" (Python, 25 stars)
23. locov - Localized Vision-Language Matching for Open-vocabulary Object Detection (Python, 19 stars)
24. unsup-car-dataset - Unsupervised Generation of a Viewpoint Annotated Car Dataset from Videos (MATLAB, 19 stars)
25. FreiPose-docker - FreiPose: A Deep Learning Framework for Precise Animal Motion Capture in 3D Spaces (Dockerfile, 18 stars)
26. optical-flow-2d-data-generation - Caffe(v1)-compatible codebase to generate optical flow training data on-the-fly; used for the IJCV 2018 paper "What Makes Good Synthetic Training Data for Learning Disparity and Optical Flow Estimation?" (http://dx.doi.org/10.1007/s11263-018-1082-6) (C++, 18 stars)
27. autodispnet - Code for AutoDispNet (ICCV 2019) (Python, 17 stars)
28. cv-exercises (Python, 15 stars)
29. spr-exercises (Jupyter Notebook, 12 stars)
30. td-or-not-td - Code for the paper "TD or not TD: Analyzing the Role of Temporal Differencing in Deep Reinforcement Learning", Artemij Amiranashvili, Alexey Dosovitskiy, Vladlen Koltun and Thomas Brox, ICLR 2018 (Python, 12 stars)
31. sf2se3 - Repository for SF2SE3: Clustering Scene Flow into SE(3)-Motions via Proposal and Selection (Python, 10 stars)
32. ovqa (Python, 10 stars)
33. understanding_flow_robustness - Official repository for "Towards Understanding Adversarial Robustness of Optical Flow Networks" (CVPR 2022) (Python, 9 stars)
34. neural-point-cloud-diffusion - Official repository for "Neural Point Cloud Diffusion for Disentangled 3D Shape and Appearance Generation" (Python, 9 stars)
35. ldce - Official repository for "Latent Diffusion Counterfactual Explanations" (Python, 9 stars)
36. PreFAct - Code and Models for the paper "Learning Representations for Predicting Future Activities" (8 stars)
37. ROS-packages - A collection of ROS packages for LMB software; DispNet(1+3), FlowNet2, etc. (C++, 7 stars)
38. FreiPose (C++, 7 stars)
39. diffusion-for-ood - Official repository for "Diffusion for Out-of-Distribution Detection on Road Scenes and Beyond". Coming soon. (Python, 5 stars)
40. tfutils - A set of tools for training networks with tensorflow (Python, 5 stars)
41. FreiCalib (C++, 5 stars)
42. netdef_slim - A python wrapper for tf to ease creation of network definitions. (Python, 4 stars)
43. iRoCS-Toolbox - n-D Image Analysis libraries and tools (C++, 4 stars)
44. rohl (Python, 3 stars)
45. RecordTool (Python, 2 stars)
46. tree-planting - Official repository for "Climate-sensitive Urban Planning Through Optimization of Tree Placements" (Python, 2 stars)
47. ade-ood - Official repo for the ADE-OoD benchmark. (Python, 1 star)