
MatchFormer

MatchFormer: Interleaving Attention in Transformers for Feature Matching

Qing Wang∗, Jiaming Zhang∗, Kailun Yang†, Kunyu Peng, Rainer Stiefelhagen

∗ denotes equal contribution and † denotes corresponding author

News

  • [09/2022] MatchFormer [PDF] is accepted to ACCV2022.

Introduction

In this work, we propose a novel hierarchical extract-and-match transformer, termed MatchFormer. Inside each stage of the hierarchical encoder, we interleave self-attention for feature extraction and cross-attention for feature matching, enabling a human-intuitive extract-and-match scheme.

More details can be found in our arXiv paper.

Installation

The requirements are listed in requirement.txt. An example of creating your own environment:

conda create -n matchformer python=3.7
conda activate matchformer
cd /path/to/matchformer
pip install -r requirement.txt
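
To quickly verify the environment, an optional check like the following can be run (a minimal sketch; it assumes PyTorch is installed via requirement.txt):

# optional sanity check: confirm PyTorch imports and whether a GPU is visible
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())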

Datasets

You can prepare the test datasets in the same way as LoFTR; place the datasets and index files in the data directory.

The dataset structure should be:

data
├── scannet
│   ├── index
│   │   ├── intrinsics.npz
│   │   ├── scannet_test.txt
│   │   └── test.npz
│   └── test
│       ├── scene0707_00
│       ├── ...
│       └── scene0806_00
└── megadepth
    ├── index
    │   ├── 0015_0.1_0.3.npz
    │   ├── ...
    │   ├── 0022_0.5_0.7.npz
    │   └── megadepth_test_1500.txt
    └── test
        ├── Undistorted_SfM
        └── phoenix
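
As an optional sanity check before evaluation, the expected files and folders can be verified with a short script (a minimal sketch; the file list is taken directly from the structure shown above):

# optional check that the dataset layout above is in place
from pathlib import Path

expected = [
    "data/scannet/index/intrinsics.npz",
    "data/scannet/index/scannet_test.txt",
    "data/scannet/index/test.npz",
    "data/scannet/test",
    "data/megadepth/index/megadepth_test_1500.txt",
    "data/megadepth/test/Undistorted_SfM",
    "data/megadepth/test/phoenix",
]
for p in expected:
    print("ok     " if Path(p).exists() else "MISSING", p)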

Evaluation

The evaluation configurations can be adjusted in /config/defaultmf.py.

The weights can be downloaded from Google Drive.

Put the weights in model/weights.
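
Optionally, a downloaded checkpoint can be opened with PyTorch to confirm it is readable before running evaluation (a minimal sketch; the file name matches the indoor large SEA command below, and the "state_dict" key is only an assumption about a Lightning-style checkpoint):

# optional: confirm a downloaded checkpoint loads correctly
import torch

ckpt = torch.load("model/weights/indoor-large-SEA.ckpt", map_location="cpu")
# Lightning-style checkpoints usually keep the weights under "state_dict".
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print("number of parameter tensors:", len(state))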

Indoor:

# adjust large SEA model config:
MATCHFORMER.BACKBONE_TYPE = 'largesea'
MATCHFORMER.SCENS = 'indoor'
MATCHFORMER.RESOLUTION = (8,2)
MATCHFORMER.COARSE.D_MODEL = 256
MATCHFORMER.COARSE.D_FFN = 256

python test.py /config/data/scannet_test_1500.py --ckpt_path /model/weights/indoor-large-SEA.ckpt --gpus=1 --accelerator="ddp"
# adjust lite LA model config:
MATCHFORMER.BACKBONE_TYPE = 'litela'
MATCHFORMER.SCENS = 'indoor'
MATCHFORMER.RESOLUTION = (8,4)
MATCHFORMER.COARSE.D_MODEL = 192
MATCHFORMER.COARSE.D_FFN = 192

python test.py /config/data/scannet_test_1500.py --ckpt_path /model/weights/indoor-lite-LA.ckpt --gpus=1 --accelerator="ddp"

Outdoor:

# adjust large LA model config:
MATCHFORMER.BACKBONE_TYPE = 'largela'
MATCHFORMER.SCENS = 'outdoor'
MATCHFORMER.RESOLUTION = (8,2)
MATCHFORMER.COARSE.D_MODEL = 256
MATCHFORMER.COARSE.D_FFN = 256

python test.py /config/data/megadepth_test_1500.py --ckpt_path /model/weights/outdoor-large-LA.ckpt --gpus=1 --accelerator="ddp"
# adjust lite SEA model config:
MATCHFORMER.BACKBONE_TYPE = 'litesea'
MATCHFORMER.SCENS = 'outdoor'
MATCHFORMER.RESOLUTION = (8,4)
MATCHFORMER.COARSE.D_MODEL = 192
MATCHFORMER.COARSE.D_FFN = 192

python test.py /config/data/megadepth_test_1500.py --ckpt_path /model/weights/outdoor-lite-SEA.ckpt --gpus=1 --accelerator="ddp"
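
For reference, the four configurations used above can be collected in one place (a plain mapping transcribed from the settings in this section, not an additional API of the repository):

# summary of the evaluated variants, transcribed from the configs above
VARIANTS = {
    "indoor-large-SEA": dict(BACKBONE_TYPE="largesea", SCENS="indoor",
                             RESOLUTION=(8, 2), D_MODEL=256, D_FFN=256),
    "indoor-lite-LA":   dict(BACKBONE_TYPE="litela",   SCENS="indoor",
                             RESOLUTION=(8, 4), D_MODEL=192, D_FFN=192),
    "outdoor-large-LA": dict(BACKBONE_TYPE="largela",  SCENS="outdoor",
                             RESOLUTION=(8, 2), D_MODEL=256, D_FFN=256),
    "outdoor-lite-SEA": dict(BACKBONE_TYPE="litesea",  SCENS="outdoor",
                             RESOLUTION=(8, 4), D_MODEL=192, D_FFN=192),
}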

Training

To train MatchFormer on top of the LoFTR code, replace the backbone in LoFTR/src/loftr/backbone/ with model/backbone/match_**.py.
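
One way to perform that replacement is sketched below (illustrative only; it assumes a LoFTR clone sits next to this repository, and the remaining wiring follows LoFTR's own backbone-building code):

# illustrative sketch of the backbone replacement described above
import shutil
from pathlib import Path

src_dir = Path("model/backbone")               # MatchFormer backbones (match_*.py)
dst_dir = Path("../LoFTR/src/loftr/backbone")  # assumed location of the LoFTR clone

for f in sorted(src_dir.glob("match_*.py")):
    shutil.copy(f, dst_dir / f.name)
    print("copied", f.name, "->", dst_dir)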

Citation

If you are interested in this work, please cite:

@inproceedings{wang2022matchformer,
  title={MatchFormer: Interleaving Attention in Transformers for Feature Matching},
  author={Wang, Qing and Zhang, Jiaming and Yang, Kailun and Peng, Kunyu and Stiefelhagen, Rainer},
  booktitle={Asian Conference on Computer Vision},
  year={2022}
}

Acknowledgments

Our work is based on LoFTR and uses its code. We appreciate the open-source LoFTR repository.