(CVPR 2022) TransMVSNet: Global Context-aware Multi-view Stereo Network with Transformers

Paper | Project Page | Arxiv | Models

Tips: If you encounter any problems when reproducing our results, please contact Yikang Ding ([email protected]). We are happy to help you solve them and to share our experience.

⚠ Change log

  • 09.2022: Added more detailed instructions on how to reproduce the reported results (see testing-on-dtu).
  • 09.2022: Fixed bugs in the MATLAB evaluation code (removed the debug code).
  • 09.2022: Fixed a bug in the default fusion parameters of gipuma, which could have a great impact on the final results.
  • 09.2022: Updated the website link and installation instructions for gipuma, which affect the fusion quality.

πŸ“” Introduction

In this paper, we present TransMVSNet, based on our exploration of feature matching in multi-view stereo (MVS). We analogize MVS to its nature as a feature matching task and therefore propose a powerful Feature Matching Transformer (FMT) that leverages intra- (self-) and inter- (cross-) attention to aggregate long-range context information within and across images. To facilitate a better adaptation of the FMT, we leverage an Adaptive Receptive Field (ARF) module to ensure a smooth transition in the scope of the features, and bridge different stages with a feature pathway that passes transformed features and gradients across scales. In addition, we apply pair-wise feature correlation to measure similarity between features, and adopt an ambiguity-reducing focal loss to strengthen the supervision. To the best of our knowledge, TransMVSNet is the first attempt to leverage the Transformer for the task of MVS. As a result, our method achieves state-of-the-art performance on the DTU dataset, the Tanks and Temples benchmark, and the BlendedMVS dataset.
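For intuition only, below is a minimal PyTorch sketch of the idea of interleaving intra- (self-) and inter- (cross-) attention between a reference view and a source view. It is an illustrative toy, not the repository's actual FMT module; all names, shapes, and hyper-parameters are assumptions (and batch_first requires PyTorch >= 1.9).

import torch
import torch.nn as nn

# Toy illustration of interleaved self-/cross-attention (NOT the repo's FMT).
class TinyFMTBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, ref, src):
        # Intra-attention: aggregate long-range context within each view.
        ref = ref + self.self_attn(ref, ref, ref)[0]
        src = src + self.self_attn(src, src, src)[0]
        # Inter-attention: the reference view queries the source view.
        ref = ref + self.cross_attn(ref, src, src)[0]
        return ref, src

# Feature maps flattened to (batch, H*W, channels).
ref, src = torch.randn(1, 1024, 64), torch.randn(1, 1024, 64)
out_ref, out_src = TinyFMTBlock()(ref, src)
print(out_ref.shape)  # torch.Size([1, 1024, 64])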

πŸ”§ Installation

Our code is tested with Python==3.6/3.7/3.8, PyTorch==1.6.0/1.7.0/1.9.0, and CUDA==10.2 on Ubuntu 18.04 with an NVIDIA GeForce RTX 2080Ti. Similar or higher versions should work well.
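Before going further, a quick sanity check of the environment (our suggestion, not part of the repo) can rule out version mismatches:

import torch

# Print the versions and device that matter for reproducing results.
print("PyTorch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))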

To use TransMVSNet, clone this repo:

git clone https://github.com/MegviiRobot/TransMVSNet.git
cd TransMVSNet

We highly recommend using Anaconda to manage the Python environment:

conda create -n transmvsnet python=3.6
conda activate transmvsnet
pip install -r requirements.txt

We also recommend using apex; you can install it from the official repo.
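Since apex is optional, a common pattern in such codebases (a sketch; check train.py for the actual usage here) is to guard the import:

# Treat apex as optional: fall back gracefully if it is not installed.
try:
    from apex import amp  # NVIDIA apex mixed-precision utilities
    HAS_APEX = True
except ImportError:
    amp, HAS_APEX = None, False
print("apex available:", HAS_APEX)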

πŸ“¦ Data preparation

In TransMVSNet, we mainly use DTU, BlendedMVS and Tanks and Temples to train and evaluate our models. You can prepare the corresponding data by following the instructions below.

βœ” DTU

For the DTU training set, you can download the preprocessed DTU training data and Depths_raw (both from the original MVSNet), and unzip them to construct a dataset folder like:

dtu_training
 β”œβ”€β”€ Cameras
 β”œβ”€β”€ Depths
 β”œβ”€β”€ Depths_raw
 └── Rectified

For the DTU testing set, you can download the preprocessed DTU testing data (from the original MVSNet) and unzip it as the test data folder, which should contain one cams folder, one images folder, and one pair.txt file.
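To catch layout mistakes before a long run, you could validate a scan folder with a small helper like this (hypothetical, not part of the repo):

import os

# Hypothetical check that a DTU test scan folder has the expected layout.
def check_dtu_scan(scan_dir):
    expected = ["cams", "images", "pair.txt"]
    missing = [e for e in expected
               if not os.path.exists(os.path.join(scan_dir, e))]
    if missing:
        raise FileNotFoundError(f"{scan_dir} is missing: {missing}")
    print(scan_dir, "looks OK")

# check_dtu_scan("/path/to/dtu_testing/scan1")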

βœ” BlendedMVS

We use the low-res set of the BlendedMVS dataset for both training and testing. You can download the low-res set from the original BlendedMVS and unzip it to form a dataset folder like below:

BlendedMVS
 β”œβ”€β”€ 5a0271884e62597cdee0d0eb
 β”‚     β”œβ”€β”€ blended_images
 β”‚     β”œβ”€β”€ cams
 β”‚     └── rendered_depth_maps
 β”œβ”€β”€ 59338e76772c3e6384afbb15
 β”œβ”€β”€ 59f363a8b45be22330016cad
 β”œβ”€β”€ ...
 β”œβ”€β”€ all_list.txt
 β”œβ”€β”€ training_list.txt
 └── validation_list.txt

βœ” Tanks and Temples

Download our preprocessed Tanks and Temples dataset and unzip it to form the dataset folder like below:

tankandtemples
 β”œβ”€β”€ advanced
 β”‚  β”œβ”€β”€ Auditorium
 β”‚  β”œβ”€β”€ Ballroom
 β”‚  β”œβ”€β”€ ...
 β”‚  └── Temple
 └── intermediate
        β”œβ”€β”€ Family
        β”œβ”€β”€ Francis
        β”œβ”€β”€ ...
        └── Train

πŸ“ˆ Training

βœ” Training on DTU

Set the configuration in scripts/train.sh:

  • Set MVS_TRAINING as the path of DTU training set.
  • Set LOG_DIR to save the checkpoints.
  • Change NGPUS to suit your device.
  • We use torch.distributed.launch by default.

To train your own model, just run:

bash scripts/train.sh

You can conveniently modify more hyper-parameters in scripts/train.sh according to the argparser in train.py, such as summary_freq and save_freq; see the sketch below.
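For reference, the corresponding arguments typically look like this sketch; summary_freq and save_freq are named above, but the defaults here are placeholders, so check the argparser in train.py for the real values.

import argparse

# Illustrative argparse sketch; see train.py for the actual arguments.
parser = argparse.ArgumentParser(description="TransMVSNet training (sketch)")
parser.add_argument("--summary_freq", type=int, default=20,
                    help="log a training summary every N iterations")
parser.add_argument("--save_freq", type=int, default=1,
                    help="save a checkpoint every N epochs")
args = parser.parse_args([])  # empty list: just demonstrate the defaults
print(args.summary_freq, args.save_freq)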

βœ” Finetune on BlendedMVS

For a fair comparison with other SOTA methods on the Tanks and Temples benchmark, we finetune our model on the BlendedMVS dataset after training it on the DTU dataset.

Set the configuration in scripts/train_bld_fintune.sh:

  • Set MVS_TRAINING as the path of BlendedMVS dataset.
  • Set LOG_DIR to save the checkpoints and training log.
  • Set CKPT as the path of the .ckpt to load, i.e., the model trained on the DTU dataset.

To finetune your own model, just run:

bash scripts/train_bld_fintune.sh

πŸ“Š Testing

For easy testing, you can download our pre-trained models and put them in the checkpoints folder, or use your own models following the instructions below.

βœ” Testing on DTU

Important Tips: to reproduce our reported results, you need to:

  • compile and install the modified gipuma from Yao Yao, as introduced below
  • use the latest code, since we have fixed small bugs and updated the fusion parameters
  • make sure you install the right versions of Python and PyTorch; some old versions trigger warnings about the default behavior of align_corners in several functions, which can affect the final results
  • be aware that we only tested the code on a 2080Ti with Ubuntu 18.04; other devices and systems might produce slightly different results
  • make sure you use model_dtu.ckpt for testing
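As a quick check that you are loading the intended weights, you can inspect the checkpoint first (a suggestion; the exact structure stored in model_dtu.ckpt may differ):

import torch

# Peek into the checkpoint before testing; keys depend on how it was saved.
ckpt = torch.load("checkpoints/model_dtu.ckpt", map_location="cpu")
if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys()))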

To start testing, set the configuration in scripts/test_dtu.sh:

  • Set TESTPATH as the path of DTU testing set.
  • Set TESTLIST as the path of test list (.txt file).
  • Set CKPT_FILE as the path of the model weights.
  • Set OUTDIR as the path to save results.

Run:

bash scripts/test_dtu.sh

Note: You can use either the gipuma fusion method or the normal fusion method to fuse the point clouds. In our experiments, we use the gipuma fusion method by default. Using the uploaded ckpt and the latest code, the two fusion methods give the results below:

Fusion   Overall (mm)
gipuma   0.304
normal   0.314
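For intuition about what fusion does: both methods filter the per-view depth maps by confidence and consistency, then back-project the surviving pixels into a shared 3D point cloud. Below is a minimal numpy sketch of just the filtering and back-projection step (the intrinsics and the 0.5 threshold are placeholders; the real fusion additionally enforces cross-view geometric consistency):

import numpy as np

# Keep confident pixels and back-project them: X = depth * K^-1 [u, v, 1]^T.
def backproject(depth, conf, K, conf_thresh=0.5):
    u, v = np.meshgrid(np.arange(depth.shape[1]), np.arange(depth.shape[0]))
    mask = conf > conf_thresh
    pix = np.stack([u[mask], v[mask], np.ones(mask.sum())])  # homogeneous pixels
    return (np.linalg.inv(K) @ pix) * depth[mask]            # (3, N) points

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
depth = np.random.uniform(400.0, 900.0, (480, 640))
conf = np.random.rand(480, 640)
print(backproject(depth, conf, K).shape)  # (3, N)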

To install gipuma, clone the modified version from Yao Yao. Modify line 10 of CMakeLists.txt to suit your GPUs; otherwise you will get warnings when compiling it, which lead to a failed fusion with 0 points in the fused point cloud. For example, if you use a 2080Ti GPU, change line 10 to:

set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};-O3 --use_fast_math --ptxas-options=-v -std=c++11 --compiler-options -Wall -gencode arch=compute_70,code=sm_70)

If you use another kind of GPU, please modify the arch code to suit your device (arch=compute_XX,code=sm_XX). Then install it with cmake . and make, which will generate the executable file at FUSIBILE_EXE_PATH.
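If you are unsure which arch code your device needs, PyTorch can report its compute capability (requires a CUDA device):

import torch

# Query the compute capability to fill in arch=compute_XX,code=sm_XX.
major, minor = torch.cuda.get_device_capability(0)
print(f"arch=compute_{major}{minor},code=sm_{major}{minor}")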

For quantitative evaluation on the DTU dataset, download the SampleSet and Points. Unzip them and place the Points folder inside SampleSet/MVS Data/. The structure looks like:

SampleSet
 └── MVS Data
       └── Points

In DTU-MATLAB/BaseEvalMain_web.m, set dataPath to the path of SampleSet/MVS Data/, plyPath to the directory that stores the reconstructed point clouds, and resultsPath to the directory in which to store the evaluation results. Then run DTU-MATLAB/BaseEvalMain_web.m in MATLAB.

We also upload our final point cloud results here. You can download them and evaluate them with the MATLAB scripts; the results look like:

Acc. (mm) Comp. (mm) Overall (mm)
0.321 0.289 0.305

βœ” Testing on Tanks and Temples

We recommend using the finetuned models (model_bld.ckpt) to test on Tanks and Temples benchmark.

Similarly, set the configuration in scripts/test_tnt.sh:

  • Set TESTPATH as the path of intermediate set or advanced set.
  • Set TESTLIST as the path of test list (.txt file).
  • Set CKPT_FILE as the path of the model weights.
  • Set OUTDIR as the path to save results.

To generate point cloud results, just run:

bash scripts/test_tnt.sh

Note that:

  • The parameters of point cloud fusion have not been studied thoroughly; the performance can be better if more appropriate thresholds are picked for each scene.
  • The dynamic fusion code is borrowed from AA-RMVSNet.

For quantitative evaluation, you can upload your point clouds to Tanks and Temples benchmark.

πŸ”— Citation

@inproceedings{ding2022transmvsnet,
  title={Transmvsnet: Global context-aware multi-view stereo network with transformers},
  author={Ding, Yikang and Yuan, Wentao and Zhu, Qingtian and Zhang, Haotian and Liu, Xiangyue and Wang, Yuanjiang and Liu, Xiao},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={8585--8594},
  year={2022}
}

πŸ“Œ Acknowledgments

We borrow some code from CasMVSNet, LoFTR and AA-RMVSNet. We thank the authors for releasing the source code.
