Voxurf: Voxel-based Efficient and Accurate Neural Surface Reconstruction

Tong Wu  Jiaqi Wang  Xingang Pan  Xudong Xu  Christian Theobalt  Ziwei Liu  Dahua Lin 

Accepted to ICLR 2023 (Spotlight)

Paper

(Teaser video: github_teaser_AdobeExpress.mp4)

Updates

  • [2023-03] Code released.
  • [2023-01] 🥳 Voxurf is accepted to ICLR 2023 (Spotlight)!

Installation

Please first install a suitable version of PyTorch and torch_scatter on your machine. We tested with PyTorch 1.10.0 on CUDA 11.1.

git clone git@github.com:wutong16/Voxurf.git
cd Voxurf
pip install -r requirements.txt
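
The repository does not pin a specific install command for PyTorch and torch_scatter. As one possible route, matching the tested setup above (adjust the versions and wheel index to your own CUDA toolkit), the official wheel indexes can be used:

# PyTorch 1.10.0 built against CUDA 11.1
pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
# torch_scatter wheels matching that PyTorch/CUDA combination
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.10.0+cu111.html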

Datasets

Public datasets

Extract the datasets to ./data/.

Custom data

For your own data (e.g., a video or multi-view images), go through the preprocessing steps below.

Preprocessing
  • Please install COLMAP and rembg first.

  • Extract video frames (if needed), remove the background, and save the masks.

mkdir data/<your-data-dir>
cd tools/preprocess
bash run_process_video.sh ../../data/<your-data-dir> <your-video-dir>
  • Estimate camera poses using COLMAP, and normalize them following IDR.
bash run_convert_camera.sh ../../data/<your-data-dir>
  • Finally, use configs/custom_e2e and run with --scene <your-data-dir>; an end-to-end sketch of these steps is given below.
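
Putting the steps above together, a minimal end-to-end sketch for a hypothetical capture named my_object (the scene name and video path are placeholders, and the masked single_runner.sh entry point from the Training section is assumed) could look like:

mkdir data/my_object
cd tools/preprocess
bash run_process_video.sh ../../data/my_object <your-video-dir>
bash run_convert_camera.sh ../../data/my_object
cd ../..
bash single_runner.sh configs/custom_e2e exp my_object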

Running

Training

  • You can find all the config files for the included datasets under ./configs.
  • To train on a set of images with a white/black background (recommended), use the corresponding config file and select a scene:
bash single_runner.sh <config_folder> <workdir> <scene>

# DTU example
bash single_runner.sh configs/dtu_e2e exp 122
  • To train without a foreground mask on DTU:
# DTU example
bash single_runner_womask.sh configs/dtu_e2e_womask exp 122
  • To train without a foreground mask on MobileBrick (the full evaluation on MobileBrick compared with other methods can be found here):
# MobileBrick example
bash single_runner_womask.sh configs/mobilebrick_e2e_womask/ exp <scene>

Note: For Windows users, please use the provided batch scripts with the extension .bat instead of the bash scripts with the extension .sh. Additionally, the forward slashes / in the paths should be replaced with backslashes \. A batch script can be run simply as <script_name>.bat <arg1> ... <argN>.
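
For example, the DTU training command from above would then read (assuming the batch counterpart of single_runner.sh keeps the same name):

single_runner.bat configs\dtu_e2e exp 122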

NVS evaluation

python run.py --config <config_folder>/fine.py -p <workdir> --sdf_mode voxurf_fine --scene <scene> --render_only --render_test
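
For instance, filling in the DTU example from the Training section (config folder configs/dtu_e2e, workdir exp, scene 122), this becomes:

python run.py --config configs/dtu_e2e/fine.py -p exp --sdf_mode voxurf_fine --scene 122 --render_only --render_test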

Extracting the mesh & evaluation

python run.py --config <config_folder>/fine.py -p <workdir> --sdf_mode voxurf_fine --scene <scene> --render_only --mesh_from_sdf

Add --extract_color to get a colored mesh, as shown below. Estimating material, albedo, and illumination is out of the scope of this work; we simply use the normal direction as the view direction to get the vertex colors.

(Colored mesh example image.)
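
To make the normal-as-view-direction step concrete, below is a minimal sketch of the idea, not the repository's actual code: it assumes a trimesh mesh and a hypothetical query_color(points, viewdirs) hook into the trained color network.

import numpy as np
import trimesh

def colorize_mesh(mesh: trimesh.Trimesh, query_color):
    # query_color(points, viewdirs) -> (N, 3) RGB in [0, 1]; hypothetical
    # wrapper around the trained model's color branch.
    verts = np.asarray(mesh.vertices, dtype=np.float32)
    normals = np.asarray(mesh.vertex_normals, dtype=np.float32)  # unit vertex normals
    rgb = query_color(verts, normals)  # view direction := vertex normal
    mesh.visual.vertex_colors = (np.clip(rgb, 0.0, 1.0) * 255).astype(np.uint8)
    return mesh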

Citation

If you find the code useful for your research, please cite our paper.

@inproceedings{wu2022voxurf,
    title={Voxurf: Voxel-based Efficient and Accurate Neural Surface Reconstruction},
    author={Tong Wu and Jiaqi Wang and Xingang Pan and Xudong Xu and Christian Theobalt and Ziwei Liu and Dahua Lin},
    booktitle={International Conference on Learning Representations (ICLR)},
    year={2023},
}

Acknowledgement

Our code is heavily based on DirectVoxGO and NeuS. Some of the preprocessing code is borrowed from IDR and LLFF. Thanks to the authors for their awesome works and great implementations! Please check out their papers for more details.