Geometry-Aware Learning of Maps for Camera Localization

This is the PyTorch implementation of our CVPR 2018 paper

"Geometry-Aware Learning of Maps for Camera Localization" - CVPR 2018 (Spotlight). Samarth Brahmbhatt, Jinwei Gu, Kihwan Kim, James Hays, and Jan Kautz

A four-minute video summary (click below for the video):

[video thumbnail: mapnet]

Citation

If you find this code useful for your research, please cite our paper

@inproceedings{mapnet2018,
  title={Geometry-Aware Learning of Maps for Camera Localization},
  author={Samarth Brahmbhatt and Jinwei Gu and Kihwan Kim and James Hays and Jan Kautz},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}

Documentation

Setup

MapNet uses a Conda environment that makes it easy to install all dependencies.

  1. Install miniconda with Python 2.7.

  2. Create the mapnet Conda environment: conda env create -f environment.yml.

  3. Activate the environment: conda activate mapnet_release.

Note: our code has been tested with PyTorch v0.4.1 (the environment.yml file should take care of installing the appropriate version).
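
After activating the environment, a quick way to confirm the expected PyTorch version is installed (a minimal sanity check):

# Quick sanity check for the mapnet_release environment
import torch

print(torch.__version__)          # should print 0.4.1
print(torch.cuda.is_available())  # True if a usable GPU is visible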

Data

We currently support the 7Scenes and Oxford RobotCar datasets. You can also write your own PyTorch dataloader for other datasets and put it in the dataset_loaders directory; refer to the README file in that directory for more details. A minimal sketch of such a loader is shown below.
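
The sketch below illustrates the general pattern only; the poses.txt layout and the 7-dim translation + quaternion pose format are assumptions for illustration, not the repository's actual interface:

import os
import numpy as np
from PIL import Image
from torch.utils import data

class MyDataset(data.Dataset):
    # Hypothetical loader: one RGB image and one 7-dim pose per sample
    def __init__(self, root, transform=None):
        self.root = root
        self.transform = transform
        # poses.txt format is an assumption: "filename tx ty tz qw qx qy qz" per line
        with open(os.path.join(root, 'poses.txt')) as f:
            lines = [l.split() for l in f if l.strip()]
        self.filenames = [l[0] for l in lines]
        self.poses = np.asarray([[float(x) for x in l[1:]] for l in lines],
                                dtype=np.float32)

    def __len__(self):
        return len(self.filenames)

    def __getitem__(self, idx):
        img = Image.open(os.path.join(self.root, self.filenames[idx])).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        return img, self.poses[idx]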

The datasets live in the data/deepslam_data directory. We provide skeletons with symlinks to get you started. Let us call your 7Scenes download directory 7SCENES_DIR and your main RobotCar download directory (in which you untar all the downloads from the website) ROBOTCAR_DIR. You will need to make the following symlinks:

cd data/deepslam_data && ln -s 7SCENES_DIR 7Scenes && ln -s ROBOTCAR_DIR RobotCar_download


Special instructions for RobotCar (only needed if you use the RobotCar data):

  1. Download this fork of the dataset SDK, and run cd scripts && ./make_robotcar_symlinks.sh after editing the ROBOTCAR_SDK_ROOT variable in it appropriately.

  2. For each sequence, you need to download the stereo_centre, vo and gps tar files from the dataset website (more details in this comment).

  3. The directory for each 'scene' (e.g. full) has .txt files defining the train/test split. While training MapNet++, you must put the sequences for self-supervised learning (dataset T in the paper) in the test_split.txt file. The dataloader for the MapNet++ models will use both images and ground-truth pose from sequences in train_split.txt and only images from the sequences in test_split.txt.

  4. To make training faster, we pre-processed the images using scripts/process_robotcar_images.py. This script undistorts the images using the camera models provided by the dataset, and scales them such that the shortest side is 256 pixels.
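
For intuition, the scaling step amounts to resizing each image so its shortest side becomes 256 pixels while preserving the aspect ratio. A minimal sketch of just that step (undistortion is omitted here because it requires the camera models shipped with the dataset):

from PIL import Image

def resize_shortest_side(img, target=256):
    # Scale so the shortest side equals `target`, preserving aspect ratio
    w, h = img.size
    scale = float(target) / min(w, h)
    return img.resize((int(round(w * scale)), int(round(h * scale))),
                      Image.BILINEAR)

img = resize_shortest_side(Image.open('example.png'))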


Running the code

Demo/Inference

The trained models for all experiments presented in the paper can be downloaded here. The inference script is scripts/eval.py. Here are some examples, assuming the models are downloaded in scripts/logs. Please go to the scripts folder to run the commands.

7Scenes

  • MapNet++ with pose-graph optimization (i.e., MapNet+PGO) on heads:
$ python eval.py --dataset 7Scenes --scene heads --model mapnet++ \
--weights logs/7Scenes_heads_mapnet++_mapnet++_7Scenes/epoch_005.pth.tar \
--config_file configs/pgo_inference_7Scenes.ini --val --pose_graph
Median error in translation = 0.12 m
Median error in rotation    = 8.46 degrees

[results plot: 7Scenes_heads_mapnet+pgo]

  • To evaluate on the train split, remove the --val flag

  • To save the results to disk without showing them on screen (useful for scripts), add the --output_dir ../results/ flag

  • See the README file in the scripts/configs directory for more information on hyper-parameters and which config files to use.

  • MapNet++ on heads:

$ python eval.py --dataset 7Scenes --scene heads --model mapnet++ \
--weights logs/7Scenes_heads_mapnet++_mapnet++_7Scenes/epoch_005.pth.tar \
--config_file configs/mapnet.ini --val
Median error in translation = 0.13 m
Median error in rotation    = 11.13 degrees
  • MapNet on heads:
$ python eval.py --dataset 7Scenes --scene heads --model mapnet \
--weights logs/7Scenes_heads_mapnet_mapnet_learn_beta_learn_gamma/epoch_250.pth.tar \
--config_file configs/mapnet.ini --val
Median error in translation = 0.18 m
Median error in rotation    = 13.33 degrees
  • PoseNet (CVPR2017) on heads:
$ python eval.py --dataset 7Scenes --scene heads --model posenet \
--weights logs/7Scenes_heads_posenet_posenet_learn_beta_logq/epoch_300.pth.tar \
--config_file configs/posenet.ini --val
Median error in translation = 0.19 m
Median error in rotation    = 12.15 degrees
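
For reference, the translation error reported above is the Euclidean distance between predicted and ground-truth camera positions, and the rotation error is the angle between predicted and ground-truth orientations (7Scenes reports medians over the test frames, RobotCar means). A minimal sketch of these per-frame metrics for quaternion orientations (an illustration, not the repository's exact evaluation code):

import numpy as np

def translation_error(t_pred, t_gt):
    # Euclidean distance between predicted and ground-truth positions (meters)
    return np.linalg.norm(t_pred - t_gt)

def rotation_error(q_pred, q_gt):
    # Angle (degrees) between two unit quaternions
    d = abs(np.dot(q_pred / np.linalg.norm(q_pred),
                   q_gt / np.linalg.norm(q_gt)))
    return 2.0 * np.degrees(np.arccos(np.clip(d, -1.0, 1.0)))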

RobotCar

  • MapNet++ with pose-graph optimization on loop:
$ python eval.py --dataset RobotCar --scene loop --model mapnet++ \
--weights logs/RobotCar_loop_mapnet++_mapnet++_RobotCar_learn_beta_learn_gamma_2seq/epoch_005.pth.tar \
--config_file configs/pgo_inference_RobotCar.ini --val --pose_graph
Mean error in translation = 6.74 m
Mean error in rotation    = 2.23 degrees

[results plot: RobotCar_loop_mapnet+pgo]

  • MapNet++ on loop:
$ python eval.py --dataset RobotCar --scene loop --model mapnet++ \
--weights logs/RobotCar_loop_mapnet++_mapnet++_RobotCar_learn_beta_learn_gamma_2seq/epoch_005.pth.tar \
--config_file configs/mapnet.ini --val
Mean error in translation = 6.95 m
Mean error in rotation    = 2.38 degrees
  • MapNet on loop:
$ python eval.py --dataset RobotCar --scene loop --model mapnet \
--weights logs/RobotCar_loop_mapnet_mapnet_learn_beta_learn_gamma/epoch_300.pth.tar \
--config_file configs/mapnet.ini --val
Mean error in translation = 9.84 m
Mean error in rotation    = 3.96 degrees

Train

The executable script is scripts/train.py. Please go to the scripts folder to run these commands. For example:

  • PoseNet on chess from 7Scenes: python train.py --dataset 7Scenes --scene chess --config_file configs/posenet.ini --model posenet --device 0 --learn_beta --learn_gamma

[training curves: train.png]

  • MapNet on chess from 7Scenes: python train.py --dataset 7Scenes --scene chess --config_file configs/mapnet.ini --model mapnet --device 0 --learn_beta --learn_gamma

  • MapNet++ is finetuned on top of a trained MapNet model: python train.py --dataset 7Scenes --checkpoint <trained_mapnet_model.pth.tar> --scene chess --config_file configs/mapnet++_7Scenes.ini --model mapnet++ --device 0 --learn_beta --learn_gamma

For example, we can train a MapNet++ model on heads from a pretrained MapNet model:

$ python train.py --dataset 7Scenes \
--checkpoint logs/7Scenes_heads_mapnet_mapnet_learn_beta_learn_gamma/epoch_250.pth.tar \
--scene heads --config_file configs/mapnet++_7Scenes.ini --model mapnet++ \
--device 0 --learn_beta --learn_gamma

For MapNet++ training, you will need visual odometry (VO) data (or other sensory inputs such as noisy GPS measurements). For 7Scenes, we provide preprocessed VO computed with the DSO method. For RobotCar, we use the provided stereo_vo. If you plan to use your own VO data (especially from a monocular camera) for MapNet++ training, you will need to first align the VO to the world coordinate frame (for rotation and scale). Please refer to the "Align VO" section below for more detailed instructions.

The meanings of various command-line parameters are documented in scripts/train.py. The values of various hyperparameters are defined in a separate .ini file. We provide some examples in the scripts/configs directory, along with a README file explaining some hyper-parameters.

If you have visdom = yes in the config file, you will need to start a Visdom server for logging the training progress:

python -m visdom.server -env_path=scripts/logs/


Network Attention Visualization

This computes the network attention visualizations and saves them to a video.

  • For the MapNet model trained on chess in 7Scenes:
$ python plot_activations.py --dataset 7Scenes --scene chess \
--weights <filename.pth.tar> --device 1 --val --config_file configs/mapnet.ini \
--output_dir ../results/

Check here for an example video of computed network attention of PoseNet vs. MapNet++.


Other Tools

Align VO to the ground truth poses

This has to be done before using VO in MapNet++ training. The executable script is scripts/align_vo_poses.py; a sketch of the underlying alignment follows the list below.

  • For the first sequence from chess in 7Scenes: python align_vo_poses.py --dataset 7Scenes --scene chess --seq 1 --vo_lib dso. Note that alignment for 7Scenes needs to be done separately for each sequence, which is why the --seq flag is needed.

  • For all of 7Scenes, you can also use the script align_vo_poses_7scenes.sh. The script stores the information at the proper location in data.
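
Conceptually, the alignment solves for a similarity transform (rotation, translation, scale) that best maps the VO trajectory onto the ground-truth trajectory. A minimal sketch using the closed-form Umeyama solution (an illustration of the idea, not the script's actual implementation):

import numpy as np

def align_similarity(src, dst):
    # Least-squares similarity transform mapping src (Nx3) onto dst (Nx3).
    # Returns (s, R, t) such that dst ~= s * R @ src + t (Umeyama, 1991).
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1  # guard against reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / xs.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t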

Mean and stdev pixel statistics across a dataset

This must be calculated before any training. Use scripts/dataset_mean.py, which also saves the information at the proper location. We provide pre-computed values for RobotCar and 7Scenes.
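
For intuition, such a computation accumulates per-channel sums over the training images and derives the mean and stdev from them (a minimal sketch, not the script itself):

import numpy as np
from PIL import Image

def channel_stats(filenames):
    # Per-channel mean and stdev over a list of RGB image files, in [0, 1]
    s, sq, n = np.zeros(3), np.zeros(3), 0
    for fn in filenames:
        x = np.asarray(Image.open(fn).convert('RGB'), dtype=np.float64) / 255.0
        s += x.sum(axis=(0, 1))
        sq += (x ** 2).sum(axis=(0, 1))
        n += x.shape[0] * x.shape[1]
    mean = s / n
    std = np.sqrt(sq / n - mean ** 2)
    return mean, std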

Calculate pose translation statistics

Calculates the mean and stdev of the pose translations and saves them automatically to the appropriate files: python calc_pose_stats.py --dataset 7Scenes --scene redkitchen. This information is needed to normalize the pose regression targets, so this script must be run before any training. We provide pre-computed values for RobotCar and 7Scenes.
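
The statistics are then used to zero-center and scale each translation target, along these lines (a hypothetical illustration):

import numpy as np

def normalize_translation(t, mean_t, std_t):
    # mean_t and std_t are the per-axis statistics computed by calc_pose_stats.py;
    # invert the normalization with t_norm * std_t + mean_t
    return (np.asarray(t) - mean_t) / std_t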

Plot the ground truth and VO poses for debugging

Run python plot_vo_poses.py --dataset 7Scenes --scene heads --vo_lib dso --val. To save the output instead of displaying it on screen, add the --output_dir ../results/ flag.

Process RobotCar GPS

The scripts/process_robotcar_gps.py script must be run before using GPS for MapNet++ training. It converts the CSV file into a format usable for training.
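
As a rough illustration of the kind of conversion involved, latitude/longitude readings can be mapped to local metric coordinates around a reference point with an equirectangular approximation (the actual script's input columns and conversion may differ):

import numpy as np

EARTH_RADIUS = 6378137.0  # meters (WGS-84 equatorial radius)

def latlon_to_local_xy(lat, lon, lat0, lon0):
    # Approximate east/north offsets (meters) of (lat, lon) from (lat0, lon0)
    x = np.radians(lon - lon0) * EARTH_RADIUS * np.cos(np.radians(lat0))
    y = np.radians(lat - lat0) * EARTH_RADIUS
    return x, y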

Demosaic and undistort RobotCar images

Doing this beforehand is advisable because it speeds up training. The scripts/process_robotcar_images.py script will do that and save the output images to a centre_processed directory inside the stereo directory. After the script finishes, you must rename this directory to centre so that the dataloader uses the undistorted and demosaiced images.
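
For intuition, demosaicing and undistortion with OpenCV look roughly like this (the Bayer pattern, intrinsics, and distortion coefficients below are placeholders; the script itself uses the camera models shipped with the RobotCar SDK):

import cv2
import numpy as np

raw = cv2.imread('raw_bayer_frame.png', cv2.IMREAD_GRAYSCALE)
rgb = cv2.cvtColor(raw, cv2.COLOR_BayerGB2BGR)  # Bayer pattern is a placeholder

K = np.array([[964.8, 0.0, 643.8],   # placeholder intrinsics, not the real model
              [0.0, 964.8, 484.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                    # placeholder distortion coefficients
undistorted = cv2.undistort(rgb, K, dist)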

FAQ

A collection of issues and resolution comments that might be useful:

License

Copyright (C) 2018 NVIDIA Corporation. All rights reserved. Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
