
PyTorch Implementation of MVSNet

An unofficial PyTorch implementation of MVSNet

MVSNet: Depth Inference for Unstructured Multi-view Stereo. Yao Yao, Zixin Luo, Shiwei Li, Tian Fang, Long Quan. ECCV 2018. MVSNet is a deep learning architecture for depth map inference from unstructured multi-view images.

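MVSNet's core idea is a variance-based cost volume: deep image features from every view are warped onto D fronto-parallel depth hypotheses of the reference camera via differentiable homographies, and the warped feature volumes are aggregated by their variance across views, which is low where the views agree. Below is a minimal PyTorch sketch of the variance aggregation step only (warping omitted; the tensor layout and function name are my own assumptions, not this repo's API):

```python
import torch

def variance_cost_volume(warped_feats: torch.Tensor) -> torch.Tensor:
    """Aggregate per-view feature volumes into one cost volume.

    warped_feats: [V, B, C, D, H, W] -- features of V views already warped
    onto the reference view's D depth hypotheses (homography warping omitted).
    Returns [B, C, D, H, W]: per-channel variance across views.
    """
    mean = warped_feats.mean(dim=0)
    # Var[x] = E[x^2] - E[x]^2, taken over the view dimension
    return (warped_feats ** 2).mean(dim=0) - mean ** 2
```

A 3D CNN then regularizes this volume into a probability volume, and a soft argmin over the depth dimension produces the final depth map.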

How to Use

Environment

  • Python 3.6 (Anaconda)
  • PyTorch 1.0.1
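
A quick sanity check that your environment matches the tested versions (a convenience snippet, not part of the repo):

```python
# Print the versions this implementation was tested against.
import sys
import torch

print(sys.version.split()[0])     # expected: 3.6.x
print(torch.__version__)          # expected: 1.0.1
print(torch.cuda.is_available())  # assumption: training needs a CUDA GPU
```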

Training

  • Download the preprocessed DTU training data (fixed training cameras, from the original MVSNet), and unzip it as the MVS_TRAINING folder.
  • In train.sh, set MVS_TRAINING to your training data path.
  • Create a log directory named checkpoints.
  • Train MVSNet: ./train.sh (a sketch of the training objective follows this list).
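
For reference, MVSNet-style training regresses the ground-truth depth on valid pixels only. Below is a minimal sketch of such a masked depth loss (the smooth-L1 choice and all argument names are illustrative assumptions; see train.py for what this repo actually uses):

```python
import torch
import torch.nn.functional as F

def masked_depth_loss(depth_est: torch.Tensor,
                      depth_gt: torch.Tensor,
                      mask: torch.Tensor) -> torch.Tensor:
    """Smooth-L1 loss between estimated and ground-truth depth, restricted
    to pixels where the ground truth is valid (mask > 0.5)."""
    valid = mask > 0.5
    return F.smooth_l1_loss(depth_est[valid], depth_gt[valid])
```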

Testing

  • Download the preprocessed DTU testing data (from the original MVSNet) and unzip it as the DTU_TESTING folder, which should contain a cams folder, an images folder, and a pair.txt file (a parser sketch follows this list).
  • In test.sh, set DTU_TESTING to your testing data path and CKPT_FILE to your checkpoint file. You can also download my pretrained model.
  • Test MVSNet: ./test.sh
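
The pair.txt file lists, for each reference view, its source views ranked by a matching score. A small parser sketch for this format (the function name is my own; the format follows the original MVSNet data, to the best of my knowledge):

```python
def read_pair_file(path: str):
    """Parse pair.txt into [(ref_view, [src_view, ...]), ...].

    Assumed format: first line is the number of views; then, per reference
    view, one line with its id and one line 'N id0 score0 id1 score1 ...'
    listing its N source views with scores (scores are ignored here).
    """
    pairs = []
    with open(path) as f:
        num_views = int(f.readline())
        for _ in range(num_views):
            ref = int(f.readline())
            tokens = f.readline().split()
            src = [int(tokens[i]) for i in range(1, len(tokens), 2)]
            pairs.append((ref, src))
    return pairs
```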

Fusion

In eval.py, I implemented a simple version of depth map fusion; a sketch of the underlying consistency check is below. Contributions to improve the code are welcome.
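
The usual recipe for this kind of fusion is a geometric-consistency test: project each reference pixel with its estimated depth into a source view, read the source depth there, project that point back, and keep the pixel only if both the reprojection error and the relative depth difference are small. Below is a hedged NumPy sketch of that test; the 4x4 world-to-camera extrinsics convention, nearest-neighbour depth lookup, thresholds, and all names are assumptions, not necessarily what eval.py does:

```python
import numpy as np

def reproject(depth_ref, K_ref, E_ref, depth_src, K_src, E_src):
    """Project reference pixels into the source view, read the source depth
    there, and project that point back into the reference view."""
    h, w = depth_ref.shape
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([x, y, np.ones_like(x)]).reshape(3, -1).astype(np.float64)

    # reference pixels -> reference camera -> world -> source image
    pts_ref = np.linalg.inv(K_ref) @ (pix * depth_ref.reshape(1, -1))
    pts_src = (E_src @ np.linalg.inv(E_ref)
               @ np.vstack([pts_ref, np.ones((1, h * w))]))[:3]
    pix_src = K_src @ pts_src
    pix_src = pix_src[:2] / pix_src[2]

    # nearest-neighbour lookup of the source depth at the projected pixels
    xs = np.clip(np.round(pix_src[0]).astype(int), 0, w - 1)
    ys = np.clip(np.round(pix_src[1]).astype(int), 0, h - 1)
    d_src = depth_src[ys, xs]

    # source pixels -> source camera -> world -> back to the reference view
    pts_src2 = np.linalg.inv(K_src) @ (np.vstack([pix_src, np.ones(h * w)]) * d_src)
    pts_ref2 = (E_ref @ np.linalg.inv(E_src)
                @ np.vstack([pts_src2, np.ones((1, h * w))]))[:3]
    depth_reproj = pts_ref2[2].reshape(h, w)
    pix_ref2 = K_ref @ pts_ref2
    pix_ref2 = (pix_ref2[:2] / pix_ref2[2]).reshape(2, h, w)
    return pix_ref2, depth_reproj

def geometric_mask(depth_ref, K_ref, E_ref, depth_src, K_src, E_src,
                   pix_thresh=1.0, rel_depth_thresh=0.01):
    """Keep pixels whose reprojection error and relative depth difference
    both fall under the given thresholds."""
    h, w = depth_ref.shape
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    pix_reproj, depth_reproj = reproject(depth_ref, K_ref, E_ref,
                                         depth_src, K_src, E_src)
    err = np.hypot(pix_reproj[0] - x, pix_reproj[1] - y)
    rel = np.abs(depth_reproj - depth_ref) / np.maximum(depth_ref, 1e-8)
    return (err < pix_thresh) & (rel < rel_depth_thresh)
```

In a full pipeline, a pixel is typically kept only if it passes this test in several source views and clears a photometric-confidence threshold; the surviving pixels are then back-projected into a merged point cloud.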

Results on DTU

                        Acc.     Comp.    Overall
MVSNet (D=256)          0.396    0.527    0.462
PyTorch-MVSNet (D=192)  0.4492   0.3796   0.4144

All metrics are in mm (lower is better); Overall is the mean of Acc. and Comp.

Due to memory limits, we only train the model with D=192 depth hypotheses; the fusion code also differs from the original repo.