Training library for local feature detection and matching

Glue Factory

Glue Factory is CVG's library for training and evaluating deep neural networks that extract and match local visual features. It enables you to:

  • Reproduce the training of state-of-the-art models for point and line matching, like LightGlue and GlueStick (ICCV 2023)
  • Train these models on multiple datasets using your own local features or lines
  • Evaluate feature extractors or matchers on standard benchmarks like HPatches or MegaDepth-1500


Point and line matching with LightGlue and GlueStick.

Installation

Glue Factory runs with Python 3 and PyTorch. The following installs the library and its basic dependencies:

git clone https://github.com/cvg/glue-factory
cd glue-factory
python3 -m pip install -e .  # editable mode

Some advanced features might require installing the full set of dependencies:

python3 -m pip install -e .[extra]

All models and datasets in gluefactory have auto-downloaders, so you can get started right away!

License

The code and trained models in Glue Factory are released with an Apache-2.0 license. This includes LightGlue and an open version of SuperPoint. Third-party models that are not compatible with this license, such as SuperPoint (original) and SuperGlue, are provided in gluefactory_nonfree, where each model might follow its own, restrictive license.

Evaluation

HPatches

Running the evaluation commands automatically downloads the dataset, by default to the directory data/. You will need about 1.8 GB of free disk space.

[Evaluating LightGlue]

To evaluate the pre-trained SuperPoint+LightGlue model on HPatches, run:

python -m gluefactory.eval.hpatches --conf superpoint+lightglue-official --overwrite

You should expect the following results:

{'H_error_dlt@1px': 0.3515,
 'H_error_dlt@3px': 0.6723,
 'H_error_dlt@5px': 0.7756,
 'H_error_ransac@1px': 0.3428,
 'H_error_ransac@3px': 0.5763,
 'H_error_ransac@5px': 0.6943,
 'mnum_keypoints': 1024.0,
 'mnum_matches': 560.756,
 'mprec@1px': 0.337,
 'mprec@3px': 0.89,
 'mransac_inl': 130.081,
 'mransac_inl%': 0.217,
 'ransac_mAA': 0.5378}

The default robust estimator is opencv, but we strongly recommend using poselib instead:

python -m gluefactory.eval.hpatches --conf superpoint+lightglue-official --overwrite \
    eval.estimator=poselib eval.ransac_th=-1

Setting eval.ransac_th=-1 auto-tunes the RANSAC inlier threshold: the evaluation is run over a range of thresholds, and results are reported for the optimal value. Here are the results as Area Under the Curve (AUC) of the homography error at 1/3/5 pixels:

Methods                | DLT                | OpenCV             | PoseLib
SuperPoint + SuperGlue | 32.1 / 65.0 / 75.7 | 32.9 / 55.7 / 68.0 | 37.0 / 68.2 / 78.7
SuperPoint + LightGlue | 35.1 / 67.2 / 77.6 | 34.2 / 57.9 / 69.9 | 37.1 / 67.4 / 77.8
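
For reference, here is a minimal NumPy sketch of how such an AUC over per-pair errors can be computed (an illustration, not the library's code): the AUC at threshold t is the normalized area under the recall-vs-error curve up to t.

import numpy as np

def error_auc(errors, thresholds=(1.0, 3.0, 5.0)):
    # Cumulative recall curve over the sorted per-pair errors.
    errors = np.sort(np.asarray(errors, dtype=float))
    recall = (np.arange(len(errors)) + 1) / len(errors)
    errors = np.concatenate(([0.0], errors))   # start the curve at the origin
    recall = np.concatenate(([0.0], recall))
    aucs = []
    for t in thresholds:
        idx = np.searchsorted(errors, t)
        e = np.concatenate((errors[:idx], [t]))          # clip the curve at t
        r = np.concatenate((recall[:idx], [recall[idx - 1]]))
        # Trapezoidal integration, normalized by the threshold.
        aucs.append(np.sum((e[1:] - e[:-1]) * (r[1:] + r[:-1]) / 2.0) / t)
    return aucs
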
[Evaluating GlueStick]

To evaluate GlueStick on HPatches, run:

python -m gluefactory.eval.hpatches --conf gluefactory/configs/superpoint+lsd+gluestick.yaml --overwrite

You should expect the following results:

{"mprec@1px": 0.245,
 "mprec@3px": 0.838,
 "mnum_matches": 1290.5,
 "mnum_keypoints": 2287.5,
 "mH_error_dlt": null,
 "H_error_dlt@1px": 0.3355,
 "H_error_dlt@3px": 0.6637,
 "H_error_dlt@5px": 0.7713,
 "H_error_ransac@1px": 0.3915,
 "H_error_ransac@3px": 0.6972,
 "H_error_ransac@5px": 0.7955,
 "H_error_ransac_mAA": 0.62806,
 "mH_error_ransac": null}

Since we use points and lines to solve for the homography, we use a different robust estimator here: Hest. Here are the results as Area Under the Curve (AUC) of the homography error at 1/3/5 pixels:

Methods              | DLT                | Hest
SP + LSD + GlueStick | 33.6 / 66.4 / 77.1 | 39.2 / 69.7 / 79.6
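
The DLT columns correspond to fitting the homography to all matches with the Direct Linear Transform, without a robust loop. Below is a minimal point-only NumPy sketch for reference (GlueStick's Hest additionally uses the line matches):

import numpy as np

def dlt_homography(pts0, pts1):
    # Estimate H with pts1 ~ H @ pts0 from N >= 4 point matches of shape (N, 2).
    # Real implementations first normalize coordinates (Hartley normalization)
    # for numerical stability; omitted here for brevity.
    rows = []
    for (x, y), (u, v) in zip(pts0, pts1):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null vector = flattened homography
    return H / H[2, 2]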

MegaDepth-1500

Running the evaluation commands automatically downloads the dataset, which takes about 1.5 GB of disk space.

[Evaluating LightGlue]

To evaluate the pre-trained SuperPoint+LightGlue model on MegaDepth-1500, run:

python -m gluefactory.eval.megadepth1500 --conf superpoint+lightglue-official
# or the adaptive variant
python -m gluefactory.eval.megadepth1500 --conf superpoint+lightglue-official \
    model.matcher.{depth_confidence=0.95,width_confidence=0.95}

The first command should print the following results:

{'mepi_prec@1e-3': 0.795,
 'mepi_prec@1e-4': 0.15,
 'mepi_prec@5e-4': 0.567,
 'mnum_keypoints': 2048.0,
 'mnum_matches': 613.287,
 'mransac_inl': 280.518,
 'mransac_inl%': 0.442,
 'rel_pose_error@10°': 0.681,
 'rel_pose_error@20°': 0.8065,
 'rel_pose_error@5°': 0.5102,
 'ransac_mAA': 0.6659}
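
The mepi_prec@t entries are the mean fraction of matches whose epipolar error under the ground-truth two-view geometry is below t. Here is a sketch of the underlying symmetric epipolar distance, under my reading of the metric (the exact coordinate normalization is defined by the library):

import numpy as np

def symmetric_epipolar_distance(pts0, pts1, E):
    # pts0, pts1: (N, 2) calibrated (normalized) coordinates; E: (3, 3).
    p0 = np.hstack([pts0, np.ones((len(pts0), 1))])
    p1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    l1 = p0 @ E.T                      # epipolar lines of pts0 in image 1
    l0 = p1 @ E                        # epipolar lines of pts1 in image 0
    residual = np.abs(np.sum(p1 * l1, axis=1))   # |p1^T E p0|
    return residual * (1.0 / np.hypot(l1[:, 0], l1[:, 1])
                       + 1.0 / np.hypot(l0[:, 0], l0[:, 1]))

def epi_precision(pts0, pts1, E, t=1e-3):
    # Fraction of matches below the epipolar error threshold t.
    return float(np.mean(symmetric_epipolar_distance(pts0, pts1, E) < t))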

To use the PoseLib estimator:

python -m gluefactory.eval.megadepth1500 --conf superpoint+lightglue-official \
    eval.estimator=poselib eval.ransac_th=2.0

[Evaluating GlueStick]

To evaluate the pre-trained SuperPoint+GlueStick model on MegaDepth-1500, run:

python -m gluefactory.eval.megadepth1500 --conf gluefactory/configs/superpoint+lsd+gluestick.yaml

Here are the results as Area Under the Curve (AUC) of the pose error at 5/10/20 degrees:

Methods                | pycolmap           | OpenCV             | PoseLib
SuperPoint + SuperGlue | 54.4 / 70.4 / 82.4 | 48.7 / 65.6 / 79.0 | 64.8 / 77.9 / 87.0
SuperPoint + LightGlue | 56.7 / 72.4 / 83.7 | 51.0 / 68.1 / 80.7 | 66.8 / 79.3 / 87.9
SIFT (2K) + LightGlue  | ? / ? / ?          | 43.5 / 61.5 / 75.9 | 60.4 / 74.3 / 84.5
SIFT (4K) + LightGlue  | ? / ? / ?          | 49.9 / 67.3 / 80.3 | 65.9 / 78.6 / 87.4
ALIKED + LightGlue     | ? / ? / ?          | 51.5 / 68.1 / 80.4 | 66.3 / 78.7 / 87.5
SuperPoint + GlueStick | 53.2 / 69.8 / 81.9 | 46.3 / 64.2 / 78.1 | 64.4 / 77.5 / 86.5
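
The pose error behind these numbers is the standard one: the maximum of the rotation angle error and the translation direction error, in degrees, with the AUC then computed over per-pair errors as in the error_auc sketch of the HPatches section. A minimal sketch:

import numpy as np

def relative_pose_error(R_gt, t_gt, R, t):
    # Rotation error: angle of the residual rotation R_gt^T @ R.
    cos_r = (np.trace(R_gt.T @ R) - 1.0) / 2.0
    err_r = np.degrees(np.arccos(np.clip(cos_r, -1.0, 1.0)))
    # Translation error: angle between directions; translation is only
    # recovered up to scale and sign, hence the abs().
    cos_t = np.dot(t_gt, t) / (np.linalg.norm(t_gt) * np.linalg.norm(t))
    err_t = np.degrees(np.arccos(np.clip(abs(cos_t), 0.0, 1.0)))
    return max(err_r, err_t)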

ETH3D

The dataset will be auto-downloaded if it is not found on disk, and will need about 6 GB of free disk space.

[Evaluating GlueStick]

To evaluate GlueStick on ETH3D, run:

python -m gluefactory.eval.eth3d --conf gluefactory/configs/superpoint+lsd+gluestick.yaml

You should expect the following results:

AP: 77.92
AP_lines: 69.22

Image Matching Challenge 2021

Coming soon!

Image Matching Challenge 2023

Coming soon!

Visual inspection

To inspect the evaluation visually, you can run:

python -m gluefactory.eval.inspect hpatches superpoint+lightglue-official

Click on a point to visualize matches on this pair.

To compare multiple methods on a dataset:

python -m gluefactory.eval.inspect hpatches superpoint+lightglue-official superpoint+superglue-official

All current benchmarks are supported by the viewer.

Detailed evaluation instructions can be found here.

Training

We generally follow a two-stage training:

  1. Pre-train on a large dataset of synthetic homographies applied to internet images (see the sketch below). We use the 1M-image distractor set of the Oxford-Paris retrieval dataset. It requires about 450 GB of disk space.
  2. Fine-tune on the MegaDepth dataset, which is based on PhotoTourism pictures of popular landmarks around the world. It exhibits more complex and realistic appearance and viewpoint changes. It requires about 420 GB of disk space.

All training commands automatically download the datasets.
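
To make stage 1 concrete, here is a minimal OpenCV sketch of the idea (the actual sampling ranges and photometric augmentations are set by the homography dataset config): perturb the image corners, fit a homography, and warp to obtain a self-supervised training pair with exact ground truth.

import cv2
import numpy as np

def random_homography_pair(image, max_shift=0.25):
    # Perturb the four corners by up to max_shift * image size, fit the
    # homography, and warp; (image, warped, H) gives a training pair with
    # known ground-truth correspondences.
    h, w = image.shape[:2]
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)
    noise = (np.random.rand(4, 2).astype(np.float32) - 0.5) * 2.0
    warped_corners = corners + noise * max_shift * np.array([w, h], np.float32)
    H = cv2.getPerspectiveTransform(corners, warped_corners)
    warped = cv2.warpPerspective(image, H, (w, h))
    return warped, H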

[Training LightGlue]

We show how to train LightGlue with SuperPoint. We first pre-train LightGlue on the homography dataset:

python -m gluefactory.train sp+lg_homography \  # experiment name
    --conf gluefactory/configs/superpoint+lightglue_homography.yaml

Feel free to use any other experiment name. By default the checkpoints are written to outputs/training/. The default batch size of 128 corresponds to the results reported in the paper and requires 2x 3090 GPUs with 24 GB of VRAM each, as well as PyTorch >= 2.0 (for FlashAttention). Configurations are managed by OmegaConf, so any entry can be overridden from the command line. If you have PyTorch < 2.0 or weaker GPUs, you may need to reduce the batch size via:

python -m gluefactory.train sp+lg_homography \
    --conf gluefactory/configs/superpoint+lightglue_homography.yaml  \
    data.batch_size=32  # for 1x 1080 GPU

Be aware that this can impact the overall performance. You might need to adjust the learning rate accordingly.
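
These command-line overrides are plain OmegaConf merges; the following standalone snippet illustrates the override mechanics with OmegaConf directly (the config path is real, the rest is an illustration):

from omegaconf import OmegaConf

# Load the training config and apply a dotlist override, mirroring the CLI.
conf = OmegaConf.load("gluefactory/configs/superpoint+lightglue_homography.yaml")
overrides = OmegaConf.from_dotlist(["data.batch_size=32"])
conf = OmegaConf.merge(conf, overrides)   # override values win
print(conf.data.batch_size)               # 32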

We then fine-tune the model on the MegaDepth dataset:

python -m gluefactory.train sp+lg_megadepth \
    --conf gluefactory/configs/superpoint+lightglue_megadepth.yaml \
    train.load_experiment=sp+lg_homography

Here the default batch size is 32. To speed up training on MegaDepth, we suggest caching the local features before training (requires around 150 GB of disk space):

# extract features
python -m gluefactory.scripts.export_megadepth --method sp --num_workers 8
# run training with cached features
python -m gluefactory.train sp+lg_megadepth \
    --conf gluefactory/configs/superpoint+lightglue_megadepth.yaml \
    train.load_experiment=sp+lg_homography \
    data.load_features.do=True
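
The cache simply stores the per-image local features on disk so the data loader can skip the extractor. As a purely hypothetical illustration of the pattern (the actual file layout and keys are defined by gluefactory.scripts.export_megadepth), features could be kept in an HDF5 file keyed by image name:

import h5py
import numpy as np

name = "0015/images/img_0001.jpg"  # hypothetical image key
with h5py.File("features_sp.h5", "a") as f:
    grp = f.require_group(name)
    grp.create_dataset("keypoints", data=np.zeros((1024, 2), np.float32))
    grp.create_dataset("descriptors", data=np.zeros((1024, 256), np.float32))
    grp.create_dataset("keypoint_scores", data=np.zeros((1024,), np.float32))

with h5py.File("features_sp.h5", "r") as f:
    keypoints = f[name]["keypoints"][()]  # read back at training time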

The model can then be evaluated using its experiment name:

python -m gluefactory.eval.megadepth1500 --checkpoint sp+lg_megadepth

You can also run all benchmarks after each training epoch with the option --run_benchmarks.

[Training GlueStick]

We first pre-train GlueStick on the homography dataset:

python -m gluefactory.train gluestick_H --conf gluefactory/configs/superpoint+lsd+gluestick-homography.yaml --distributed

Feel free to use any other experiment name. Configurations are managed by OmegaConf so any entry can be overridden from the command line.

We then fine-tune the model on the MegaDepth dataset:

python -m gluefactory.train gluestick_MD --conf gluefactory/configs/superpoint+lsd+gluestick-megadepth.yaml --distributed

Note that the original model was trained with the splits train_scenes.txt and valid_scenes.txt, which contain some overlap with the IMC challenge. The new default splits, train_scenes_clean.txt and valid_scenes_clean.txt, remove this overlap.

Available models

Glue Factory supports training and evaluating the following deep matchers:

Model     | Training? | Evaluation?
LightGlue | ✅        | ✅
GlueStick | ✅        | ✅
SuperGlue | ✅        | ✅
LoFTR     | ❌        | ✅

Using the following local feature extractors:

Model                 | LightGlue config
SuperPoint (open)     | superpoint-open+lightglue_{homography,megadepth}.yaml
SuperPoint (official) | superpoint+lightglue_{homography,megadepth}.yaml
SIFT (via pycolmap)   | sift+lightglue_{homography,megadepth}.yaml
ALIKED                | aliked+lightglue_{homography,megadepth}.yaml
DISK                  | disk+lightglue_{homography,megadepth}.yaml
Key.Net + HardNet     | ❌ TODO

Coming soon

  • More baselines (LoFTR, ASpanFormer, MatchFormer, SGMNet, DKM, RoMa)
  • Training deep detectors and descriptors like SuperPoint
  • IMC evaluations
  • Better documentation

BibTeX Citation

Please consider citing the following papers if you found this library useful:

@InProceedings{lindenberger_2023_lightglue,
  title     = {{LightGlue: Local Feature Matching at Light Speed}},
  author    = {Philipp Lindenberger and
               Paul-Edouard Sarlin and
               Marc Pollefeys},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year      = {2023}
}
@InProceedings{pautrat_suarez_2023_gluestick,
  title     = {{GlueStick: Robust Image Matching by Sticking Points and Lines Together}},
  author    = {R{\'e}mi Pautrat* and
               Iago Su{\'a}rez* and
               Yifan Yu and
               Marc Pollefeys and
               Viktor Larsson},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year      = {2023}
}
