• Stars: 239
  • Rank: 167,865 (Top 4%)
  • Language: Python
  • License: MIT License
  • Created: almost 5 years ago
  • Updated: over 3 years ago

Repository Details

Code for "Self-Supervised 3D Keypoint Learning for Ego-motion Estimation"

Self-Supervised 3D Keypoint Learning for Ego-Motion Estimation

Accepted as a plenary talk at CoRL 2020.

Overview

  • Sparse mono-SfM: A new framework for the simultaneous learning of keypoint detection, matching and 3D lifting by incorporating a differentiable pose estimation module.
  • Multi-view adaptation: A novel adaptation technique that exploits the temporal context in videos to further boost the repeatability and matching performance of the keypoint network.
  • State-of-the-art performance: We integrate the networks into a visual odometry framework, enabling robust and accurate ego-motion estimation.

[Full paper] [YouTube]

Setting up your environment

You need a machine with recent NVIDIA drivers and a GPU. We recommend using Docker (see the nvidia-docker2 instructions) for a reproducible environment. To set up your environment, run the following in a terminal (only tested on Ubuntu 18.04):

git clone https://github.com/TRI-ML/KP3D.git
cd KP3D
# if you want to use docker (recommended)
make docker-build

All commands below are listed as if run directly inside our container. To run any of them in a container, either start the container in interactive mode with make docker-start to get a shell where you can type the commands, or run them in one step:

# single GPU
make docker-run COMMAND="some-command"
# multi-GPU
make docker-run-mpi COMMAND="some-command"

Data

Download the HPatches dataset for evaluation:

cd /data/datasets/kp3d/
wget http://icvl.ee.ic.ac.uk/vbalnt/hpatches/hpatches-sequences-release.tar.gz
tar -xvf hpatches-sequences-release.tar.gz
mv hpatches-sequences-release HPatches
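
If the download completed correctly, the HPatches directory should contain 116 sequences (57 illumination and 59 viewpoint sequences), each with six images and the corresponding homographies. Below is a minimal sanity-check sketch, assuming the layout of the official hpatches-sequences release and the path used above:

# Sanity-check the HPatches download (assumes the official hpatches-sequences
# layout: one folder per sequence, prefixed i_ for illumination, v_ for viewpoint).
from pathlib import Path

root = Path("/data/datasets/kp3d/HPatches")
seqs = sorted(p for p in root.iterdir() if p.is_dir())
illum = [p for p in seqs if p.name.startswith("i_")]
view = [p for p in seqs if p.name.startswith("v_")]
print(f"{len(seqs)} sequences ({len(illum)} illumination, {len(view)} viewpoint)")
# Expected: 116 sequences (57 illumination, 59 viewpoint)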

Download the KITTI odometry dataset from here - get the color images and ground-truth poses. Unzip the data into /data/datasets/kp3d/KITTI_odometry/.
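
The evaluation commands below expect the standard KITTI odometry layout under /data/datasets/kp3d/KITTI_odometry/dataset/, i.e. sequences/<seq>/image_2/ for the color images and poses/<seq>.txt for the ground-truth poses. Here is a quick check, written as a sketch against that assumed layout:

# Verify the KITTI odometry layout expected by the evaluation commands below
# (assumes the standard release structure: sequences/<seq>/image_2 and poses/<seq>.txt).
from pathlib import Path

root = Path("/data/datasets/kp3d/KITTI_odometry/dataset")
for seq in ["00", "01", "02", "03", "04", "05", "06", "07", "08", "09", "10"]:
    images = root / "sequences" / seq / "image_2"
    poses = root / "poses" / f"{seq}.txt"
    n_images = len(list(images.glob("*.png"))) if images.is_dir() else 0
    print(f"seq {seq}: {n_images} images, poses {'ok' if poses.is_file() else 'MISSING'}")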

Pre-trained models:

Download the pre-trained models from here and place them in /data/models/kp3d/.
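
Before running the evaluations, you can check that both checkpoints are in place and loadable. This is only a sketch: the file names come from the commands below, while loading them on CPU with torch.load is an assumption about the checkpoint format.

# Check that the pre-trained checkpoints exist and can be deserialized.
# Loading on CPU via torch.load is an assumption about the checkpoint format.
from pathlib import Path
import torch

for name in ["depth_resnet.ckpt", "keypoint_resnet.ckpt"]:
    path = Path("/data/models/kp3d") / name
    print(name, "exists:", path.is_file())
    ckpt = torch.load(path, map_location="cpu")
    if isinstance(ckpt, dict):
        print("  top-level keys:", list(ckpt.keys()))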

To replicate our results on the KITTI odometry dataset (Table 1 - Ours), run:

make docker-run-mpi COMMAND="python kp3d/evaluation/evaluate_keypoint_odometry.py --depth_model /data/models/kp3d/depth_resnet.ckpt --keypoint_model /data/models/kp3d/keypoint_resnet.ckpt --dataset_dir /data/datasets/kp3d/KITTI_odometry/dataset/ --output_dir ./pose_output/ --sequence 01 02 06 08 09 10 00 03 04 05 07 --align_trajectory --run_evaluation"

You should get the following results:

Sequence   01      02     06     08     09     10     00     03     04     05     07     Mean Train  Mean Test
t_rel      17.60   3.22   1.84   3.05   2.73   5.08   2.73   3.03   2.21   3.53   2.42   5.58        2.79
r_rel      0.62    1.01   0.75   0.73   0.63   0.97   1.09   2.42   1.97   1.18   1.00   0.79        1.53
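
The Mean Train column is the average over the first six sequences listed (01, 02, 06, 08, 09, 10) and Mean Test the average over the remaining five (00, 03, 04, 05, 07). The short sketch below recomputes both from the rounded per-sequence values, so the last digit can differ slightly from the table:

# Recompute the Mean Train / Mean Test columns from the per-sequence values above.
t_rel = {"01": 17.60, "02": 3.22, "06": 1.84, "08": 3.05, "09": 2.73, "10": 5.08,
         "00": 2.73, "03": 3.03, "04": 2.21, "05": 3.53, "07": 2.42}
r_rel = {"01": 0.62, "02": 1.01, "06": 0.75, "08": 0.73, "09": 0.63, "10": 0.97,
         "00": 1.09, "03": 2.42, "04": 1.97, "05": 1.18, "07": 1.00}
train = ["01", "02", "06", "08", "09", "10"]
test = ["00", "03", "04", "05", "07"]
for name, metric in [("t_rel", t_rel), ("r_rel", r_rel)]:
    mean_train = sum(metric[s] for s in train) / len(train)
    mean_test = sum(metric[s] for s in test) / len(test)
    print(f"{name}: mean train {mean_train:.3f}, mean test {mean_test:.3f}")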

To replicate our results on the HPatches dataset (Table 4 - KeypointNet), run:

make docker-run COMMAND="python kp3d/evaluation/evaluate_keypoint_patches.py --pretrained_model /data/models/kp3d/keypoint_resnet.ckpt --input /data/datasets/kp3d/HPatches/"

You should get the following results:

Evaluation for (320, 256):

Repeatability  Localization  C1     C3     C5     MScore
0.686          0.800         0.514  0.867  0.914  0.588

Evaluation for (640, 480):

Repeatability  Localization  C1     C3     C5     MScore
0.674          0.886         0.526  0.857  0.921  0.535

The numbers deviate slightly from the paper due to different dependency versions.

Trajectories

Trajectories of DS-DSO on KITTI odometry sequences 00-10: ds_dso_kitti_00_10.zip. We also include the results of our ablative analysis as well as our evaluation of monodepth2.

[Trajectory plots for sequences 05 and 07]
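
The trajectories are presumably stored in the KITTI odometry pose format (each line is a 3x4 camera-to-world matrix flattened to 12 values), so they can be plotted directly. A minimal plotting sketch under that assumption; the file name seq05.txt is a hypothetical example of a file extracted from the zip:

# Plot a top-down (x-z) trajectory from a KITTI-format pose file.
# "seq05.txt" is a hypothetical file name; adjust it to a file from the zip above.
import numpy as np
import matplotlib.pyplot as plt

poses = np.loadtxt("seq05.txt").reshape(-1, 3, 4)  # (N, 3, 4) camera-to-world matrices
x, z = poses[:, 0, 3], poses[:, 2, 3]              # translation components
plt.plot(x, z)
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axis("equal")
plt.title("Top-down trajectory")
plt.show()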

License

The source code is released under the MIT license.

Citation

Please use the following citation when referencing our work:

@inproceedings{tang2020kp3d,
  title     = {{Self-Supervised 3D Keypoint Learning for Ego-Motion Estimation}},
  author    = {Jiexiong Tang and Rares Ambrus and Vitor Guizilini and Sudeep Pillai and Hanme Kim and Patric Jensfelt and Adrien Gaidon},
  booktitle = {Conference on Robot Learning (CoRL)},
  year      = {2020},
}

More Repositories

 1. packnet-sfm - TRI-ML Monocular Depth Estimation Repository (Python, 1,207 stars)
 2. vidar (Python, 549 stars)
 3. DDAD - Dense Depth for Autonomous Driving (DDAD) dataset (Python, 490 stars)
 4. dd3d - Official PyTorch implementation of DD3D: Is Pseudo-Lidar needed for Monocular 3D Object detection? (ICCV 2021), Dennis Park*, Rares Ambrus*, Vitor Guizilini, Jie Li, and Adrien Gaidon (Python, 460 stars)
 5. prismatic-vlms - A flexible and efficient codebase for training visually-conditioned language models (VLMs) (Python, 351 stars)
 6. PF-Track - Implementation of PF-Track (Python, 189 stars)
 7. KP2D (Python, 176 stars)
 8. sdflabel - Official PyTorch implementation of CVPR 2020 oral "Autolabeling 3D Objects With Differentiable Rendering of SDF Shape Priors" (Python, 159 stars)
 9. realtime_panoptic - Official PyTorch implementation of CVPR 2020 Oral: Real-Time Panoptic Segmentation from Dense Detections (Python, 112 stars)
10. permatrack - Implementation for Learning to Track with Object Permanence (Python, 111 stars)
11. camviz - Visualization Library (Python, 99 stars)
12. dgp - ML Dataset Governance Policy for Autonomous Vehicle Datasets (Python, 93 stars)
13. VEDet (Python, 37 stars)
14. RAP - Official code for the paper RAP: Risk-Aware Prediction for Robust Planning: https://arxiv.org/abs/2210.01368 (Python, 31 stars)
15. VOST - Code for the VOST dataset (Python, 20 stars)
16. RAM - Implementation for Object Permanence Emerges in a Random Walk along Memory (Python, 18 stars)
17. road - ROAD: Learning an Implicit Recursive Octree Auto-Decoder to Efficiently Encode 3D Shapes (CoRL 2022) (Python, 11 stars)
18. efm_datasets - TRI-ML Embodied Foundation Datasets (Python, 7 stars)
19. refine - Official PyTorch implementation of the SIGGRAPH 2024 paper "ReFiNe: Recursive Field Networks for Cross-Modal Multi-Scene Representation" (Python, 5 stars)
20. HAICU (4 stars)
21. binomial_cis - Computation of binomial confidence intervals that achieve exact coverage (Jupyter Notebook, 3 stars)
22. stochastic_verification - Official repository for the paper "How Generalizable Is My Behavior Cloning Policy? A Statistical Approach to Trustworthy Performance Evaluation" (Python, 3 stars)
23. vlm-evaluation - VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning (Python, 1 star)