Back to the Feature: Learning Robust Camera Localization from Pixels to Pose (CVPR 2021)


We introduce PixLoc, a neural network that localizes a given image via direct feature alignment with a 3D model of the environment. PixLoc is trained end-to-end and is interpretable, accurate, and generalizes to new scenes and across domains, e.g. from outdoors to indoors. It is described in our CVPR 2021 paper, "Back to the Feature: Learning Robust Camera Localization from Pixels to Pose".

Installation

PixLoc is built with Python >=3.6 and PyTorch. The package pixloc includes code for both training and evaluation. Installing the package locally also installs the minimal dependencies listed in requirements.txt:

git clone https://github.com/cvg/pixloc/
cd pixloc/
pip install -e .

Generating visualizations and animations requires extra dependencies that can be installed with:

pip install -e .[extra]

Paths to the datasets and to training and evaluation outputs are defined in pixloc/settings.py. The default structure is as follows:

.
├── datasets     # public datasets
└── outputs
    ├── training # checkpoints and training logs
    ├── hloc     # 3D models and retrieval for localization
    └── results  # outputs of the evaluation
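
For reference, a minimal pixloc/settings.py consistent with this layout could look like the sketch below. DATA_PATH and LOC_PATH are mentioned later in this README; the other two names are illustrative assumptions, not necessarily the module's actual variables:

from pathlib import Path

root = Path(__file__).parent.parent        # repository root

DATA_PATH = root / 'datasets'              # public datasets
LOC_PATH = root / 'outputs/hloc'           # 3D models and retrieval for localization
TRAINING_PATH = root / 'outputs/training'  # checkpoints and logs (illustrative name)
EVAL_PATH = root / 'outputs/results'       # evaluation outputs (illustrative name)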

Visualizations

Demo

Have a look at the Jupyter notebook demo.ipynb to localize an image and animate the predictions in 2D and 3D. This requires downloading the pre-trained weights and the data for either the Aachen Day-Night or Extended CMU Seasons datasets using:

python -m pixloc.download --select checkpoints Aachen CMU --CMU_slices 2


3D viewer in the demo notebook.

3D animations

You can also check out our cool 3D viewer by launching the webserver with python3 viewer/server.py and visiting http://localhost:8000/viewer/viewer.html

Prediction confidences

The notebook visualize_confidences.ipynb shows how to visualize the confidences of the predictions over image sequences and turn them into videos.
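
As a generic illustration of such an overlay (this is not code from the notebook; the image and confidence map below are random placeholders):

import matplotlib.pyplot as plt
import numpy as np

# Random placeholders standing in for a query image and a predicted
# per-pixel confidence map.
image = np.random.rand(480, 640, 3)
confidence = np.random.rand(480, 640)

plt.imshow(image)
plt.imshow(confidence, cmap='turbo', alpha=0.5)  # semi-transparent heatmap
plt.axis('off')
plt.show()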

Datasets

The codebase can evaluate PixLoc on the following datasets: 7Scenes, Cambridge Landmarks, Aachen Day-Night, Extended CMU Seasons, and RobotCar Seasons. Running the evaluation requires downloading the following assets:

  • The given dataset;
  • Sparse 3D Structure-from-Motion point clouds and results of the image retrieval, both generated with our toolbox hloc and hosted here;
  • Weights of the model trained on CMU or MegaDepth, hosted here.

We provide a convenient script to download all assets for one or multiple datasets using:

python -m pixloc.download --select [7Scenes|Cambridge|Aachen|CMU|RobotCar|checkpoints]

(see --help for additional arguments like --CMU_slices)

Evaluation

To perform the localization on all queries of one of the supported datasets, simply launch the corresponding run script:

python -m pixloc.run_[7Scenes|Cambridge|Aachen|CMU|RobotCar]  # choose one

Optional flags:

  • --results path_to_output_poses defaults to outputs/results/pixloc_[dataset_name].txt
  • --from_poses to refine the poses estimated by hloc rather than starting from reference poses
  • --inlier_ranking to run the oracle baseline using the inlier counts of hloc
  • --scenes to select a subset of the scenes of the 7Scenes and Cambridge Landmarks datasets
  • any configuration entry can be overwritten from the command line thanks to OmegaConf. For example, to increase the number of reference images, add refinement.num_dbs=5 (see the sketch below).
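
Under the hood this is standard OmegaConf merging; a minimal sketch with made-up default values (not the repo's actual configuration):

from omegaconf import OmegaConf

# Made-up defaults standing in for the repo's actual evaluation configuration.
base = OmegaConf.create({'refinement': {'num_dbs': 2, 'point_selection': 'all'}})

# Dotted key=value arguments from the command line merge over the defaults.
override = OmegaConf.from_dotlist(['refinement.num_dbs=5'])
conf = OmegaConf.merge(base, override)
assert conf.refinement.num_dbs == 5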

This displays the evaluation metrics for 7Scenes and Cambridge, while the other datasets require uploading the poses to the evaluation server hosted at visuallocalization.net.

Training

Data preparation


The 3D point clouds, camera poses, and intrinsic parameters are preprocessed together to allow for fast data loading during training. These files are generated using the scripts pixloc/pixlib/preprocess_[cmu|megadepth].py. This data is also hosted here and can be downloaded via:

python -m pixloc.download --select CMU MegaDepth --training

This also downloads the training split of the CMU dataset. The undistorted MegaDepth data (images) can be downloaded from the D2-Net repository.

Training experiment


The training framework and detailed usage instructions are described at pixloc/pixlib/. The training experiments are defined by configuration files for which examples are given at pixloc/pixlib/configs/. For example, the following command trains PixLoc on the CMU dataset:

python -m pixloc.pixlib.train pixloc_cmu_reproduce \
    --conf pixloc/pixlib/configs/train_pixloc_cmu.yaml

  • To track the loss and evaluation metrics throughout training, we use TensorBoard:

tensorboard --logdir outputs/training/

Once the validation loss has saturated (around 20k-40k iterations), the training can be interrupted with Ctrl+C. All training experiments were conducted with a single RTX 2080 Ti NVIDIA GPU, but the code supports multi-GPU training for faster convergence.

  • To investigate the two-view predictions on the validation splits, check out the notebooks training_CMU.ipynb and training_MegaDepth.ipynb.

  • To evaluate the localization using a newly trained model, simply add the name of your training experiment to the evaluation command, such as:

python -m pixloc.run_CMU experiment=experiment_name

Running PixLoc on your own data

  1. Localizing requires calibrated and posed reference images as well as calibrated query images.
  2. We also need a sparse SfM point cloud and a list of image pairs obtained with image retrieval. You can easily generate them with our toolbox hloc.
  3. Taking pixloc/run_Aachen.py as a template, we can copy the file structure of the Aachen dataset and/or adjust the variable default_paths, which stores local subpaths relative to DATA_PATH and LOC_PATH (defined in pixloc/settings.py), as sketched below.
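
A minimal sketch of such an adaptation; the keys below are hypothetical placeholders, and the actual fields of default_paths are defined in pixloc/run_Aachen.py:

# Hypothetical adaptation of pixloc/run_Aachen.py to a custom scene.
default_paths = dict(
    query_images='my_scene/query/',        # relative to DATA_PATH
    reference_images='my_scene/db/',       # relative to DATA_PATH
    reference_sfm='my_scene/sfm/',         # relative to LOC_PATH
    retrieval_pairs='my_scene/pairs.txt',  # relative to LOC_PATH
)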

Extending PixLoc

The training framework at pixloc/pixlib/ is the entry point for extensions such as new datasets or models; see its detailed usage instructions.

Goodies

Viewer


We provide in viewer/ a simple web-based visualizer built with three.js. Quantities of interest (3D points, 2D projections, camera trajectories) are first written to a JSON file and then loaded in the front-end. The trajectory can be animated and individual frames captured to generate a video.
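
For instance, a dump for the viewer could be produced as in the sketch below; the file name and JSON keys are hypothetical, and the actual schema is defined by the code in viewer/:

import json
import numpy as np

# Hypothetical dump for the web viewer; the schema expected by
# viewer/viewer.html may differ.
data = {
    'points3D': np.random.rand(1000, 3).tolist(),  # point cloud
    'trajectory': np.random.rand(50, 3).tolist(),  # camera centers over time
}
with open('dump.json', 'w') as f:
    json.dump(data, f)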

Geometry objects


We provide in pixloc/pixlib/geometry/wrappers.py PyTorch objects for representing SE(3) Poses and Camera models with lens distortion. With a torch.Tensor-like interface, these objects support batching, GPU computation, backpropagation, and operations over 3D and 2D points:

import torch
from pixloc.pixlib.geometry import Pose, Camera

B, N = 2, 1000
R = torch.eye(3).repeat(B, 1, 1)  # rotation matrices with shape (B,3,3)
t = torch.zeros(B, 3)             # translation vectors with shape (B,3)
p3D = torch.rand(B, N, 3)         # 3D points with shape (B,N,3)

T_w2c = Pose.from_Rt(R, t)
T_w2c = T_w2c.cuda()   # behaves like a torch.Tensor
p3D_c = T_w2c * p3D    # transform points
T_A2C = T_B2C @ T_A2B  # chain Pose objects

cam1 = Camera.from_colmap(dict)     # from a COLMAP dict
cam = torch.stack([cam1, cam2])     # batch Camera objects
p2D, mask = cam.world2image(p3D_c)  # project and undistort
J, mask = cam.J_world2image(p3D_c)  # Jacobian of the projection

Implementation of GN-Net


We provide in pixloc/pixlib/models/gnnet.py a clean implementation of the Gauss-Newton Network introduced by von Stumberg et al., along with a configuration file to train it on CMU. At inference time, we can run pose estimation with our classical Levenberg-Marquardt (LM) optimizer.
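
To make the optimizer concrete, here is a minimal sketch of a single damped Gauss-Newton (LM) step on feature residuals; it illustrates the general technique, not pixloc's actual implementation:

import torch

def lm_step(r, J, damping=0.01):
    """One Levenberg-Marquardt step for a 6-DoF pose.

    r: (N, D) feature residuals; J: (N, D, 6) Jacobians w.r.t. the pose.
    """
    J = J.reshape(-1, 6)
    r = r.reshape(-1)
    H = J.T @ J                       # Gauss-Newton approximation of the Hessian
    g = J.T @ r                       # gradient of the 0.5 * ||r||^2 cost
    H = H + damping * torch.diag(H.diagonal())  # LM damping of the diagonal
    return torch.linalg.solve(H, -g)  # solve the 6x6 normal equations

delta = lm_step(torch.randn(1000, 8), torch.randn(1000, 8, 6))  # toy usage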

Disclaimer

Since the publication of the paper, we have substantially refactored the codebase, with many usability improvements and updated dependencies. As a consequence, the results of the evaluation and training might slightly deviate from (and often improve over) the original numbers found in our CVPR 2021 paper. If you are writing a paper, for consistency with the literature, please report the original numbers. If you are building on top of this codebase, consider reporting the latest numbers for fairness.

BibTeX Citation

Please consider citing our work if you use any of the ideas presented in the paper or the code from this repo:

@inproceedings{sarlin21pixloc,
  author    = {Paul-Edouard Sarlin and
               Ajaykumar Unagar and
               Måns Larsson and
               Hugo Germain and
               Carl Toft and
               Victor Larsson and
               Marc Pollefeys and
               Vincent Lepetit and
               Lars Hammarstrand and
               Fredrik Kahl and
               Torsten Sattler},
  title     = {{Back to the Feature: Learning Robust Camera Localization from Pixels to Pose}},
  booktitle = {CVPR},
  year      = {2021},
}

Logo font by Cyril Bourreau.
