Deep learning methods have shown promising results in 3D reconstruction. However, existing 3D reconstruction projects such as Colmap and OpenMVS are still based on traditional methods. Recently, learning-based multi-view stereo methods, such as MVSNet and its variants, have shown promising results in depth estimation. Here, we build a 3D reconstruction project that uses learning-based MVS methods for depth inference.
The project is a complete 3D reconstruction system. We use Colmap for SfM, CasMVSNet and D2HC-RMVSNet for depth inference, and OpenMVS for dense point-cloud reconstruction, mesh reconstruction, and mesh texturing. We write glue code to combine them so that the pipeline runs end to end.
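At a high level, the pipeline runs the stages below. This is only a rough sketch of what the demo scripts wire together: the working-directory layout (work/...) is an assumption and the depth-inference call is shown as a placeholder, so refer to demo.sh for the exact commands.

# 1. Sparse reconstruction (SfM) with Colmap
colmap feature_extractor --database_path work/database.db --image_path work/images
colmap exhaustive_matcher --database_path work/database.db
colmap mapper --database_path work/database.db --image_path work/images --output_path work/sparse
colmap image_undistorter --image_path work/images --input_path work/sparse/0 --output_path work/dense

# 2. Depth inference with CasMVSNet_pl or D2HC-RMVSNet (replacing OpenMVS's own
#    patch-match depth estimation); the predicted depth maps are converted with
#    depth2dmap.py (see the note below).

# 3. Dense fusion, mesh reconstruction and texturing with OpenMVS
InterfaceCOLMAP -i work/dense -o scene.mvs
DensifyPointCloud scene.mvs
ReconstructMesh scene_dense.mvs
TextureMesh scene_dense_mesh.mvs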
An online demo video is available at https://www.zhihu.com/zvideo/1443954079655063552; it shows how to use the project and some 3D reconstruction results.
The highlights of our project are as follows:
- We build the first deep learning based 3D reconstruction project, named DeepMVS.
- DeepMVS is much faster and more accurate than OpenMVS.
- OS: Ubuntu 16.04 or 18.04
- NVIDIA GPU with CUDA>=10.0
For OpenMVS: Please refer to OpenMVS
For CasMVSNet_pl and D2HC-RMVSNet: Please refer to CasMVSNet_pl and D2HC-RMVSNet, which are variants of MVSNet
We provide a Docker image for the environment:
docker pull minchen12345/deepmvs:latest
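To start a container from this image with GPU access, an invocation like the following can be used (the --gpus flag assumes the NVIDIA Container Toolkit is installed, and the /workspace mount point is just an example):

docker run --gpus all -it --rm -v $(pwd):/workspace minchen12345/deepmvs:latest /bin/bash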
Note: We use the depth2dmap.py script to convert the output of the MVSNet variants into the depth-map format expected by OpenMVS.
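The depth maps written by the MVSNet variants are standard PFM files. The snippet below is a minimal sketch of reading one of them with NumPy; the output path is hypothetical, and the actual packing of depth plus camera parameters into OpenMVS's depth-map container is handled by depth2dmap.py, not shown here.

import re
import numpy as np

def read_pfm(path):
    """Read a PFM depth map and return a float32 array of shape (H, W) or (H, W, 3)."""
    with open(path, "rb") as f:
        header = f.readline().decode("ascii").rstrip()
        if header not in ("PF", "Pf"):
            raise ValueError("not a PFM file")
        channels = 3 if header == "PF" else 1

        # Image dimensions: "width height"
        dims = re.match(r"^(\d+)\s+(\d+)\s*$", f.readline().decode("ascii"))
        width, height = int(dims.group(1)), int(dims.group(2))

        # The sign of the scale factor encodes the endianness of the raw data
        scale = float(f.readline().decode("ascii").rstrip())
        endian = "<" if scale < 0 else ">"

        data = np.fromfile(f, dtype=endian + "f")
        shape = (height, width, channels) if channels == 3 else (height, width)
        return np.flipud(data.reshape(shape))  # PFM stores rows bottom-to-top

depth = read_pfm("outputs/depth_est/00000000.pfm")  # hypothetical output path
print(depth.shape, depth.min(), depth.max())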
Run
bash demo.sh test_folder test_img_name
Example:
bash demo_casmvsnet.sh example test0
Our code and dataset are released under the Apache 2.0 license.
This repository is based on Colmap, OpenMVS, CasMVSNet_pl, and D2HC-RMVSNet.
TODO:
- Add the complete code for OpenMVS
- Add SuperPoint for SfM, like https://github.com/cvg/sfm-disambiguation-colmap
- Refine the mesh reconstruction with MVSDF