
SparseNeuS: Fast Generalizable Neural Surface Reconstruction from Sparse Views [ECCV 2022]

We present a novel neural surface reconstruction method, called SparseNeuS, which can generalize to new scenes and work well with sparse images (as few as 2 or 3).

Project Page | Paper

Setup

Dependencies

  • pytorch
  • torchsparse
  • opencv_python
  • trimesh
  • numpy
  • pyhocon
  • icecream
  • tqdm
  • scipy
  • PyMCubes

Dataset

  • DTU training dataset. Please download the preprocessed DTU dataset provided by MVSNet. As stated in the paper, we preprocess the images to obtain masks of the "black empty background", which reduces image noise. The preprocessed masks can be downloaded here. Training without the masks is not a problem; just ignore the "masks" field in the dataloader.
  • DTU testing dataset. Since our target is neural reconstruction from sparse views, we select two sets of three images from the 15 testing scenes (same as IDR) for evaluation. Download our prepared testing dataset.
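Applying the background masks in a dataloader amounts to zeroing out the masked pixels. A minimal NumPy sketch (the array names and the convention that the mask is 1 for foreground are assumptions, not the repository's actual loader code):

```python
import numpy as np

# Assumed convention: mask is 1 for foreground, 0 for the black empty background.
image = np.random.rand(4, 4, 3).astype(np.float32)  # H x W x 3, stand-in for a DTU image
mask = np.zeros((4, 4), dtype=np.float32)
mask[1:3, 1:3] = 1.0  # keep only the central region

# Zero out background pixels so they contribute no noise during training
masked_image = image * mask[..., None]
```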

Easy to try

Just run the provided bash file to get the teaser result.

bash ./sample_bashs/dtu_scan118.sh

Training

Our training has two stages: first train the coarse level (lod0), then the fine level (lod1).

python exp_runner_generic.py --mode train --conf ./confs/general_lod0.conf
python exp_runner_generic.py --mode train --conf ./confs/general_lod1.conf --is_continue --restore_lod0
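The second command continues from the coarse stage: `--restore_lod0` loads only the coarse-level weights into the two-level model while the fine level keeps its fresh initialization. A hypothetical sketch of that filtering logic (the `lod0.` key prefix and flat-dict checkpoint layout are illustrative assumptions, not the repository's actual checkpoint format):

```python
# Hypothetical checkpoint: flat dict mapping parameter name -> weights
checkpoint = {
    "lod0.sdf_network.w": [0.1, 0.2],
    "lod0.color_network.w": [0.3],
    "lod1.sdf_network.w": [0.0, 0.0],
}

model_state = {
    "lod0.sdf_network.w": [0.0, 0.0],
    "lod0.color_network.w": [0.0],
    "lod1.sdf_network.w": [0.5, 0.5],  # fine level keeps its fresh init
}

# Restore only the coarse-level (lod0) parameters; leave lod1 untouched
restored = {
    k: (checkpoint[k] if k.startswith("lod0.") else v)
    for k, v in model_state.items()
}
```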

Finetuning

The reconstructed results generated by the generic model can be further improved using our consistency-aware fine-tuning scheme.

The parameters 'visibility_beta' and 'visibility_gama' control the consistency level, which determines how much of the scene is kept.

For some cases with weak texture or noise, improper values of 'visibility_beta' and 'visibility_gama' can easily produce an empty result. To make the optimization more robust, 'visibility_weight_thred' is introduced to avoid discarding all regions of the scene.

#!/usr/bin/env bash
python exp_runner_finetune.py \
--mode train --conf ./confs/finetune.conf --is_finetune \
--checkpoint_path ./weights/ckpt.pth \
--case_name scan118  --train_imgs_idx 0 1 2 --test_imgs_idx 0 1 2 --near 700 --far 1100 \
--visibility_beta 0.025 --visibility_gama 0.010 --visibility_weight_thred 0.7 0.6 0.5
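One way to read the list passed to `--visibility_weight_thred` (0.7 0.6 0.5): thresholds are tried from strict to loose so that, if a strict threshold would discard every region, a looser one is used instead. A hedged NumPy sketch of that fallback (the weight values and the `select_kept_regions` helper are illustrative assumptions, not the repository's implementation):

```python
import numpy as np

def select_kept_regions(weights, thresholds):
    """Try thresholds from strict to loose; return the first non-empty mask.

    weights: per-region consistency weights in [0, 1]
    thresholds: e.g. [0.7, 0.6, 0.5], mirroring --visibility_weight_thred
    """
    for t in thresholds:
        kept = weights >= t
        if kept.any():
            return kept, t
    # Last resort: keep everything rather than return an empty scene
    return np.ones_like(weights, dtype=bool), None

weights = np.array([0.55, 0.62, 0.48])
kept, used_t = select_kept_regions(weights, [0.7, 0.6, 0.5])
```

Here the strictest threshold (0.7) keeps nothing, so the sketch falls back to 0.6, which keeps one region instead of returning an empty result.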

Results

You can download the DTU and BMVS results reported in the paper here.

Citation

Cite as below if you find this repository helpful to your project:

@article{long2022sparseneus,
  title={SparseNeuS: Fast Generalizable Neural Surface Reconstruction from Sparse views},
  author={Long, Xiaoxiao and Lin, Cheng and Wang, Peng and Komura, Taku and Wang, Wenping},
  journal={ECCV},
  year={2022}
}

Acknowledgement

Some code snippets are borrowed from IDR, NeuS and IBRNet. Thanks for these great projects.