🔥 News
- 30/03/2023: 😉 Code of Level-S2fM released
Level-S2fM: Structure from Motion on Neural Level Set of Implicit Surfaces
Project Page | Paper | Data
Yuxi Xiao, Nan Xue, Tianfu Wu, Gui-Song Xia
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2023)
TL;DR:
Level-S2fM is an incremental neural Structure-from-Motion (SfM) pipeline built on a neural level-set representation of implicit surfaces. It leverages 2D image matches and neural rendering to drive the joint optimization of camera poses, an SDF, and a radiance field.
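The core representation is the zero level set {x : SDF(x) = 0} of a signed distance field, which rendering queries by marching camera rays to the surface. As a rough illustration only (not the Level-S2fM code, which uses a learned MLP rather than an analytic SDF), the sketch below sphere-traces a unit-sphere SDF to find where a ray crosses the level set:

```python
import math

def sdf_sphere(p, radius=1.0):
    """Analytic stand-in for a learned SDF: signed distance to a sphere."""
    return math.sqrt(sum(c * c for c in p)) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-5):
    """March along the ray, stepping by the SDF value each iteration.
    Returns the ray depth t where the zero level set is reached, or None."""
    t = 0.0
    for _ in range(max_steps):
        point = tuple(o + t * d for o, d in zip(origin, direction))
        dist = sdf(point)
        if abs(dist) < eps:   # reached the zero level set
            return t
        t += dist             # SDF value is a safe step size
        if t > 100.0:         # ray escaped the scene
            return None
    return None

# A ray from (0, 0, -3) along +z hits the unit sphere at depth t = 2
t_hit = sphere_trace((0.0, 0.0, -3.0), (0.0, 0.0, 1.0), sdf_sphere)
```

Because each step equals the distance to the nearest surface, the march can never overshoot the level set, which is what makes SDFs convenient for ray-surface intersection.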
Installation
Set up a conda environment and activate it:
# create the environment from the yaml
conda env create -f env.yaml
conda activate levels2fm
Install vren from ngp_pl:
- Clone ngp_pl:
  git clone https://github.com/kwea123/ngp_pl
- Enter the ngp_pl directory:
  cd ngp_pl
- Install the CUDA extension:
  pip install models/csrc
Prepare Data
In our default setting, Level-S2fM depends on 2D image matches from SIFT. To leverage existing solutions and avoid redundancy, we directly use the SIFT matches and pose graph from COLMAP. We provide our processed data on Google Drive; please download and unzip it into the folder ./data before running.
Reconstruction with Level-S2fM
Running Default Version
In the default version, Level-S2fM uses SDF-based triangulation and neural bundle adjustment, where the SDF acts as a top-down regularization that manages the sparse point set with feature tracks and filters outliers.
python train.py --group=<group_name_exp> --pipeline=LevelS2fM --yaml=<config file> --name=<exp_name> --data.dataset=<dataset> --data.scene=<scene_name> --sfm_mode=full --Ablate_config.dual_field=true
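The outlier-filtering role of the SDF can be sketched as follows. This is a toy illustration: an analytic sphere stands in for the learned SDF network, and `filter_by_sdf` and the threshold `tau` are hypothetical names, not the repo's API:

```python
def filter_by_sdf(points, sdf, tau=0.05):
    """Keep triangulated points that lie near the zero level set;
    points far from the current surface estimate are treated as outliers."""
    return [p for p in points if abs(sdf(p)) <= tau]

# Analytic stand-in for the learned SDF: a unit sphere at the origin
unit_sphere = lambda p: sum(c * c for c in p) ** 0.5 - 1.0

points = [(1.0, 0.0, 0.0),   # on the surface: kept
          (0.0, 0.99, 0.0),  # near the surface: kept
          (2.0, 0.0, 0.0)]   # far from the surface: rejected
kept = filter_by_sdf(points, unit_sphere)
```

The real system queries the learned SDF, so the "surface" improves jointly with the poses and points as reconstruction proceeds.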
Running with Some Ablations
To try Level-S2fM with traditional triangulation:
python train.py --group=<group_name_exp> --pipeline=LevelS2fM --yaml=<config file> --name=<exp_name> --data.dataset=<dataset> --data.scene=<scene_name> --sfm_mode=full --Ablate_config.dual_field=true --Ablate_config.tri_trad=true
To try Level-S2fM with traditional bundle adjustment:
python train.py --group=<group_name_exp> --pipeline=LevelS2fM --yaml=<config file> --name=<exp_name> --data.dataset=<dataset> --data.scene=<scene_name> --sfm_mode=full --Ablate_config.dual_field=true --Ablate_config.tri_trad=true --Ablate_config.ba_trad=true
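For reference, traditional triangulation intersects the viewing rays of a matched feature geometrically, without consulting the SDF. A minimal midpoint-method sketch of that idea, assuming known camera centers and unit ray directions (the repo's actual implementation will differ):

```python
import math

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation: the point halfway between the closest
    points of two viewing rays o1 + t1*d1 and o2 + t2*d2.
    o1, o2: camera centers; d1, d2: unit ray directions (3-tuples)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    base = tuple(q - p for p, q in zip(o1, o2))   # baseline o2 - o1
    c = dot(d1, d2)
    denom = 1.0 - c * c
    if abs(denom) < 1e-12:                        # parallel rays: degenerate
        return None
    t1 = (dot(base, d1) - dot(base, d2) * c) / denom
    t2 = (dot(base, d1) * c - dot(base, d2)) / denom
    p1 = tuple(o + t1 * d for o, d in zip(o1, d1))
    p2 = tuple(o + t2 * d for o, d in zip(o2, d2))
    return tuple((u + v) / 2.0 for u, v in zip(p1, p2))

# Two cameras at (-1,0,0) and (1,0,0) observing a point at (0,0,2)
s = math.sqrt(5.0)
X = triangulate_midpoint((-1.0, 0.0, 0.0), (1.0 / s, 0.0, 2.0 / s),
                         (1.0, 0.0, 0.0), (-1.0 / s, 0.0, 2.0 / s))
```

With noisy matches, such purely geometric intersections admit outliers, which is the gap the SDF-based variant is designed to close.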
Running with Provided Scripts
cd into ./scripts and run a script file, e.g.:
CUDA_VISIBLE_DEVICES=<GPU> sh train_ETH3D.sh
Creating your own dataset
A complete instruction is coming soon!
Tips
Coming soon with instructions.
Comments
Our Level-S2fM provides a new perspective for revisiting traditional sparse 3D reconstruction (SfM) with neural field representations and neural rendering. This work may help you see the capability of a simple coordinate MLP in SfM. However, it is not yet a mature system like COLMAP, and we will continue to refine it in the future.
Acknowledgements
- Thanks to Johannes Schönberger for his excellent work COLMAP.
- Thanks to Thomas Müller for his excellent work tiny-cuda-nn and Instant-NGP.
- Thanks to Lior Yariv for her excellent work VolSDF.
- Thanks to AI Aoi for his excellent PyTorch implementation of Instant-NGP, ngp_pl.
- Thanks to Sida Peng for his valuable suggestions and discussions on our Level-S2fM.
BibTeX
@inproceedings{xiao2022level,
  title={Level-S$^{2}$fM: Structure from Motion on Neural Level Set of Implicit Surfaces},
  author={Yuxi Xiao and Nan Xue and Tianfu Wu and Gui-Song Xia},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}