• Stars: 1,913
• Rank: 23,457 (Top 0.5%)
• Language: Python
• License: Apache License 2.0
• Created: over 3 years ago
• Updated: 8 months ago


Repository Details

Code for "NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video", CVPR 2021 oral

NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video

Project Page | Paper


NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video
Jiaming Sun*, Yiming Xie*, Linghao Chen, Xiaowei Zhou, Hujun Bao
CVPR 2021 (Oral Presentation and Best Paper Candidate)

[Demo video: real-time 3D reconstruction from monocular video]


TODO List and ETA

  • Code (with detailed comments) for training and inference, and the data preparation scripts (2021-5-2).
  • Pretrained models on ScanNet (2021-5-2).
  • Real-time reconstruction demo on custom ARKit data with instructions (2021-5-7).
  • Evaluation code and metrics (expected 2021-6-10).

How to Use

Installation

# Ubuntu 18.04 and above is recommended.
sudo apt install libsparsehash-dev  # you can try to install sparsehash with conda if you don't have sudo privileges.
conda env create -f environment.yaml
conda activate neucon
[FAQ on environment installation]
  • AttributeError: module 'torchsparse_backend' has no attribute 'hash_forward'

    • Clone torchsparse to a local directory. If you have done that, recompile and install torchsparse after removing the build folder.
  • No sudo privileges to install libsparsehash-dev

    • Install sparsehash in conda (included in environment.yaml) and run export CPLUS_INCLUDE_PATH=$CONDA_PREFIX/include before installing torchsparse.
  • For other problems, you can also refer to the FAQ in torchsparse.
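After activating the environment, a quick import check confirms that PyTorch's CUDA build and the compiled torchsparse extension are usable (a minimal sanity-check sketch, not part of the repo):

# Run inside the activated neucon environment; a failed torchsparse import
# usually means the extension needs recompiling (see the FAQ above).
import torch
import torchsparse

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torchsparse:", torchsparse.__version__)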

Pretrained Model on ScanNet

Download the pretrained weights and put them under PROJECT_PATH/checkpoints/release. You can also use gdown to download them from the command line:

mkdir checkpoints && cd checkpoints
gdown --id 1zKuWqm9weHSm98SZKld1PbEddgLOQkQV
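To verify the download, the checkpoint should load cleanly with torch.load (a minimal sketch; the filename below is illustrative, so adjust it to whatever gdown saved):

# Sanity-check the downloaded weights (filename is illustrative).
import torch

ckpt = torch.load("checkpoints/release/model_000047.ckpt", map_location="cpu")
print(sorted(ckpt.keys())[:5])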

Real-time Demo on Custom Data with Camera Poses from ARKit

We provide a real-time demo of NeuralRecon running with self-captured ARKit data. Please refer to DEMO.md for details.

Data Preparation for ScanNet

Download and extract ScanNet by following the instructions provided at http://www.scan-net.org/.

[Expected directory structure of ScanNet (click to expand)]

You can obtain the train/val/test split information from here.

DATAROOT
└───scannet
│   └───scans
│   |   └───scene0000_00
│   |       └───color
│   |       │   │   0.jpg
│   |       │   │   1.jpg
│   |       │   │   ...
│   |       │   ...
│   └───scans_test
│   |   └───scene0707_00
│   |       └───color
│   |       │   │   0.jpg
│   |       │   │   1.jpg
│   |       │   │   ...
│   |       │   ...
|   └───scannetv2_test.txt
|   └───scannetv2_train.txt
|   └───scannetv2_val.txt

Next, run the data preparation script, which parses the raw data format into the processed pickle format. This script also generates the ground-truth TSDFs using TSDF Fusion.
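For intuition, TSDF Fusion integrates each depth frame into a voxel grid with a running weighted average. A conceptual per-voxel sketch follows (illustrative only, not the code in tools/tsdf_fusion/generate_gt.py):

# Conceptual KinectFusion-style TSDF update for a single voxel;
# illustrative only, not the repo's actual implementation.
def fuse_voxel(tsdf_old, weight_old, tsdf_obs, weight_obs=1.0):
    weight_new = weight_old + weight_obs
    tsdf_new = (weight_old * tsdf_old + weight_obs * tsdf_obs) / weight_new
    return tsdf_new, weight_new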

[Data preparation script]
# Change PATH_TO_SCANNET and OUTPUT_PATH accordingly.
# For the training/val split:
python tools/tsdf_fusion/generate_gt.py --data_path PATH_TO_SCANNET --save_name all_tsdf_9 --window_size 9
# For the test split:
python tools/tsdf_fusion/generate_gt.py --test --data_path PATH_TO_SCANNET --save_name all_tsdf_9 --window_size 9
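Once the script finishes, the processed pickle files can be inspected directly (a sketch; the exact filenames and layout under the save directory may differ):

# Peek at a processed fragment file (path is illustrative; adjust to your
# --data_path and --save_name choices).
import pickle

with open("PATH_TO_SCANNET/all_tsdf_9/fragments_train.pkl", "rb") as f:
    fragments = pickle.load(f)
print(type(fragments), len(fragments))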

Inference on ScanNet test-set

python main.py --cfg ./config/test.yaml

The reconstructed meshes will be saved to PROJECT_PATH/results.
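The meshes can be opened with any standard mesh library, for example trimesh (a sketch assuming .ply output; the scene filename is illustrative):

# Load and preview one reconstructed mesh (filename is illustrative).
import trimesh

mesh = trimesh.load("results/scene_scannet_release_fusion_eval_47/scene0707_00.ply")
print(mesh.vertices.shape, mesh.faces.shape)
mesh.show()  # interactive viewer, requires a display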

Evaluation on ScanNet test-set

python tools/evaluation.py --model ./results/scene_scannet_release_fusion_eval_47 --n_proc 16

Note that evaluation.py uses pyrender to render depth maps from the predicted mesh for 2D evaluation. If you are using headless rendering, you must also set the environment variable PYOPENGL_PLATFORM=osmesa (see pyrender for more details).
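That is, launch the evaluation as PYOPENGL_PLATFORM=osmesa python tools/evaluation.py ... Equivalently, the variable can be set from Python, as long as it happens before pyrender (or anything else that imports OpenGL) is loaded:

# Select the headless OSMesa backend; this must run before importing pyrender.
import os
os.environ["PYOPENGL_PLATFORM"] = "osmesa"

import pyrender  # imported only after the platform is set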

You can print the results of a previous evaluation run using:

python tools/visualize_metrics.py --model ./results/scene_scannet_release_fusion_eval_47

Training on ScanNet

Start training by running ./train.sh. More information about training (e.g., GPU requirements and convergence time) will be added soon.

[train.sh]
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0,1
python -m torch.distributed.launch --nproc_per_node=2 main.py --cfg ./config/train.yaml

Training is separated into two phases, and the switching between phases is controlled manually for now (see the config sketch after the list):

  • Phase 1 (epochs 0-20): training on single fragments. MODEL.FUSION.FUSION_ON=False, MODEL.FUSION.FULL=False

  • Phase 2 (epochs 21-50): training with GRUFusion. MODEL.FUSION.FUSION_ON=True, MODEL.FUSION.FULL=True
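The MODEL.FUSION.* dot syntax suggests a yacs-style config, so under that assumption the phase switch amounts to flipping two flags in ./config/train.yaml. A hypothetical sketch (mechanism unverified against main.py):

# Hypothetical sketch, assuming the config is a yacs CfgNode (suggested by
# the MODEL.FUSION.FUSION_ON syntax); verify against main.py before use.
from yacs.config import CfgNode as CN

cfg = CN(new_allowed=True)
cfg.merge_from_file("./config/train.yaml")            # phase-1 defaults
cfg.merge_from_list(["MODEL.FUSION.FUSION_ON", True,  # phase-2 overrides
                     "MODEL.FUSION.FULL", True])
print(cfg.MODEL.FUSION)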

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@article{sun2021neucon,
  title={{NeuralRecon}: Real-Time Coherent {3D} Reconstruction from Monocular Video},
  author={Sun, Jiaming and Xie, Yiming and Chen, Linghao and Zhou, Xiaowei and Bao, Hujun},
  journal={CVPR},
  year={2021}
}

Acknowledgment

We would like to especially thank Reviewer 3 for the insightful and constructive comments. We would like to thank Sida Peng, Siyu Zhang and Qi Fang for proofreading. Some of the code in this repo is borrowed from MVSNet_pytorch; thanks, Xiaoyang!

Copyright

This work is affiliated with ZJU-SenseTime Joint Lab of 3D Vision, and its intellectual property belongs to SenseTime Group Ltd.

Copyright SenseTime. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

More Repositories

1. EasyMocap (Python, 3,279 stars): Make human motion capture easier.
2. LoFTR (Jupyter Notebook, 2,054 stars): Code for "LoFTR: Detector-Free Local Feature Matching with Transformers", CVPR 2021, T-PAMI 2022
3. 4K4D (Python, 1,417 stars): [CVPR 2024] 4K4D: Real-Time 4D View Synthesis at 4K Resolution
4. snake (Jupyter Notebook, 1,147 stars): Code for "Deep Snake for Real-Time Instance Segmentation", CVPR 2020 oral
5. OnePose (Python, 903 stars): Code for "OnePose: One-Shot Object Pose Estimation without CAD Models", CVPR 2022
6. neuralbody (Python, 897 stars): Code for "Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans", CVPR 2021 best paper candidate
7. pvnet (Jupyter Notebook, 788 stars): Code for "PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation", CVPR 2019 oral
8. NeuralRecon-W (Python, 681 stars): Code for "Neural 3D Reconstruction in the Wild", SIGGRAPH 2022 (Conference Proceedings)
9. street_gaussians (576 stars): Code for "Street Gaussians for Modeling Dynamic Urban Scenes"
10. mvpose (Jupyter Notebook, 508 stars): Code for "Fast and Robust Multi-Person 3D Pose Estimation from Multiple Views" (CVPR 2019, T-PAMI 2021)
11. animatable_nerf (Python, 488 stars): Code for "Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos", TPAMI 2024, ICCV 2021
12. manhattan_sdf (Python, 482 stars): Code for "Neural 3D Scene Reconstruction with the Manhattan-world Assumption", CVPR 2022 Oral
13. EasyVolcap (Python, 461 stars): [SIGGRAPH Asia 2023 (Technical Communications)] EasyVolcap: Accelerating Neural Volumetric Video Research
14. ENeRF (Python, 400 stars): SIGGRAPH Asia 2022: Code for "Efficient Neural Radiance Fields for Interactive Free-viewpoint Video"
15. DetectorFreeSfM (393 stars): Code for "Detector-Free Structure from Motion", arXiv preprint
16. clean-pvnet (C++, 384 stars): Code for "PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation", CVPR 2019 oral
17. NeuMesh (Python, 374 stars): Code for "NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing", ECCV 2022 Oral
18. AutoRecon (Python, 341 stars): Code for "AutoRecon: Automated 3D Object Discovery and Reconstruction", CVPR 2023 (Highlight)
19. OnePose_Plus_Plus (Python, 329 stars): Code for "OnePose++: Keypoint-Free One-Shot Object Pose Estimation without CAD Models", NeurIPS 2022
20. object_nerf (Python, 306 stars): Code for "Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering", ICCV 2021
21. PVIO (C++, 298 stars): Robust and Efficient Visual-Inertial Odometry with Multi-plane Priors
22. Vox-Fusion (Python, 257 stars): Code for "Dense Tracking and Mapping with Voxel-based Neural Implicit Representation", ISMAR 2022
23. EfficientLoFTR (Jupyter Notebook, 251 stars)
24. ENFT-SfM (C++, 250 stars): A reference implementation for ENFT-SfM.
25. Wis3D (TypeScript, 248 stars): A web-based 3D visualization tool for 3D computer vision.
26. SMAP (Python, 237 stars): Code for "SMAP: Single-Shot Multi-Person Absolute 3D Pose Estimation" (ECCV 2020)
27. mlp_maps (Cuda, 230 stars): Code for "Representing Volumetric Videos as Dynamic MLP Maps", CVPR 2023
28. im4d (Python, 226 stars): SIGGRAPH Asia 2023: Code for "Im4D: High-Fidelity and Real-Time Novel View Synthesis for Dynamic Scenes"
29. disprcnn (Jupyter Notebook, 209 stars): Code release for "Stereo 3D Object Detection via Shape Prior Guided Instance Disparity Estimation" (CVPR 2020, TPAMI 2021)
30. PVO (Python, 198 stars): Code for "PVO: Panoptic Visual Odometry", CVPR 2023
31. GIFT (Python, 191 stars): Code for "GIFT: Learning Transformation-Invariant Dense Visual Descriptors via Group CNNs", NeurIPS 2019
32. Mirrored-Human (184 stars): Code for "Reconstructing 3D Human Pose by Watching Humans in the Mirror" (CVPR 2021 Oral)
33. pvnet-rendering (Python, 177 stars): Render images for pvnet training.
34. IntrinsicNeRF (Python, 174 stars): Code for "IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis", ICCV 2023
35. InvRender (Python, 165 stars): Code for "Modeling Indirect Illumination for Inverse Rendering", CVPR 2022
36. EIBA (C++, 161 stars): Efficient Incremental Bundle Adjustment.
37. instant-nvr (Python, 144 stars): [CVPR 2023] Code for "Learning Neural Volumetric Representations of Dynamic Humans in Minutes"
38. Monocular_3D_human (137 stars)
39. eval-vislam (C++, 137 stars): Toolkit for VI-SLAM evaluation.
40. SINE (Python, 123 stars): Code for "Semantic-driven Image-based NeRF Editing with Prior-guided Editing Field", CVPR 2023
41. rnin-vio (Python, 116 stars)
42. deltar (Python, 112 stars): Code for "DELTAR: Depth Estimation from a Light-weight ToF Sensor And RGB Image", ECCV 2022
43. NeuSC (Python, 111 stars): A Temporal Voyage: Code for "Neural Scene Chronology" [CVPR 2023]
44. DeFlowSLAM (109 stars): Code for "DeFlowSLAM: Self-Supervised Scene Motion Decomposition for Dynamic Dense SLAM"
45. SegmentBA (C++, 108 stars): Segment-based Bundle Adjustment.
46. CoLi-BA (C++, 107 stars)
47. iMoCap (Python, 104 stars): Dataset for ECCV 2020 "Motion Capture from Internet Videos"
48. VS-Net (Python, 86 stars): VS-Net: Voting with Segmentation for Visual Localization
49. UDOLO (Python, 84 stars)
50. pats (C++, 84 stars): Code for "PATS: Patch Area Transportation with Subdivision for Local Feature Matching", CVPR 2023
51. SA-HMR (Python, 79 stars): Code for "Learning Human Mesh Recovery in 3D Scenes", CVPR 2023
52. ENFT (C++, 76 stars): Efficient Non-Consecutive Feature Tracking for Robust SfM (http://www.zjucvg.net/ls-acts/ls-acts.html)
53. TotalSelfScan (Python, 73 stars): Code for "TotalSelfScan: Learning Full-body Avatars from Self-Portrait Videos of Faces, Hands, and Bodies" (NeurIPS 2022)
54. SAM-Graph (69 stars): Code for "SAM-guided Graph Cut for 3D Instance Segmentation"
55. gcasp (Python, 66 stars): [CoRL 2022] Generative Category-Level Shape and Pose Estimation with Semantic Primitives
56. GeneAvatar (59 stars): Code for "GeneAvatar: Generic Expression-Aware Volumetric Head Avatar Editing from a Single Image", CVPR 2024
57. zju3dv.github.io (HTML, 57 stars)
58. vig-init (C++, 56 stars): Rapid and Robust Monocular Visual-Inertial Initialization with Gravity Estimation via Vertical Edges
59. coxgraph (C++, 54 stars): Code for "Coxgraph: Multi-Robot Collaborative, Globally Consistent, Online Dense Reconstruction System", IROS 2021 Best Paper Award Finalist on Safety, Security, and Rescue Robotics in memory of Motohiro Kisoi
60. RVL-Dynamic (Python, 47 stars): Code for "Prior Guided Dropout for Robust Visual Localization in Dynamic Environments", ICCV 2019
61. Vox-Surf (Python, 46 stars): Code for "Vox-Surf: Voxel-based Implicit Surface Representation", TVCG 2022
62. NIID-Net (Python, 43 stars): Code for "NIID-Net: Adapting Surface Normal Knowledge for Intrinsic Image Decomposition in Indoor Scenes", TVCG
63. hghoi (C++, 43 stars): ICCV 2023, Hierarchical Generation of Human-Object Interactions with Diffusion Probabilistic Models
64. RLP_VIO (C++, 42 stars): Code for "RLP-VIO: Robust and Lightweight Plane-based Visual-Inertial Odometry for Augmented Reality", CAVW 2022
65. Mirror-NeRF (Python, 37 stars): Code for "Mirror-NeRF: Learning Neural Radiance Fields for Mirrors with Whitted-Style Ray Tracing", ACM MM 2023
66. AutoDecomp (HTML, 36 stars): 3D object discovery from casual object captures.
67. RelightableAvatar (Python, 35 stars): [CVPR 2024 (Highlight)] Relightable and Animatable Neural Avatar from Sparse-View Video
68. CloseMoCap (33 stars): Official implementation of "Reconstructing Close Human Interaction from Multiple Views"
69. poking_perception (Python, 29 stars)
70. MagLoc-AR (14 stars)
71. MVN-AFM (Python, 11 stars): Code for "Multi-View Neural 3D Reconstruction of Micro-/Nanostructures with Atomic Force Microscopy"
72. blink_sim (11 stars)
73. pvnet-depth-sup (10 stars)
74. hybrid3d (C++, 10 stars)
75. nr_in_a_room (Python, 10 stars): Code for "Neural Rendering in a Room: Amodal 3D Understanding and Free-Viewpoint Rendering for the Closed Scene Composed of Pre-Captured Objects", ACM ToG
76. RNNPose (6 stars): RNNPose: Recurrent 6-DoF Object Pose Refinement with Robust Correspondence Field Estimation and Pose Optimization, CVPR 2022
77. rnin-vio.github.io (CSS, 2 stars)
78. LSFB (1 star)