  • Stars: 681
  • Rank: 64,040 (Top 2%)
  • Language: Python
  • License: Apache License 2.0
  • Created: almost 2 years ago
  • Updated: over 1 year ago

Repository Details

Code for "Neural 3D Reconstruction in the Wild", SIGGRAPH 2022 (Conference Proceedings)

Neural 3D Reconstruction in the Wild

Project Page | Paper


Neural 3D Reconstruction in the Wild
Jiaming Sun, Xi Chen, Qianqian Wang, Zhengqi Li, Hadar Averbuch-Elor, Xiaowei Zhou, Noah Snavely
SIGGRAPH 2022 (Conference Proceedings)

[Demo video]

TODO List

  • Training (i.e., reconstruction) code.
  • Toolkit and pipeline to reproduce the evaluation results on the proposed Heritage-Recon dataset.
  • Config for reconstructing generic outdoor/indoor scenes.

Installation

conda env create -f environment.yaml
conda activate neuconw
scripts/download_sem_model.sh
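
If you want to verify the environment before moving on, a quick check such as the following can help (this assumes the environment provides PyTorch with CUDA support, which the training code relies on):

# Optional sanity check: should print a PyTorch version and `True` on a CUDA machine.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"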

Reproduce reconstruction results on Heritage-Recon

Dataset setup

Download the Heritage-Recon dataset and put it under `data`. You can also use `gdown` to download it from the command line:

mkdir data && cd data
gdown --id 1eZvmk4GQkrRKUNZpagZEIY_z8Lsdw94v
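
After the download finishes (and after extraction, if it arrives as an archive), the four scene folders should sit where the cache-generation step below expects them. A quick check from the repository root:

# Expected output: brandenburg_gate  lincoln_memorial  palacio_de_bellas_artes  pantheon_exterior
ls data/heritage-recon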

Generate the ray cache for all four scenes:

for SCENE_NAME in brandenburg_gate lincoln_memorial palacio_de_bellas_artes pantheon_exterior; do
  scripts/data_generation.sh data/heritage-recon/${SCENE_NAME}
done

Training

To train scenes in our Heritage-Recon dataset:

# Replace `SCENE_NAME` with the scene you want to reconstruct.
scripts/train.sh $EXP_NAME config/train_${SCENE_NAME}.yaml $NUM_GPU $NUM_NODE
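
For example, a concrete run for the brandenburg_gate scene on a single node with 4 GPUs might look like the following (the experiment name is an arbitrary tag; adjust the GPU and node counts to your hardware):

scripts/train.sh brandenburg_exp config/train_brandenburg_gate.yaml 4 1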

Evaluating

First, extract a mesh from the checkpoint you want to evaluate:

scripts/sdf_extract.sh $EXP_NAME config/train_${SCENE_NAME}.yaml $CKPT_PATH 10

The reconstructed meshes will be saved to PROJECT_PATH/results.

Then run the evaluation pipeline:

scripts/eval_pipeline.sh $SCENE_NAME $MESH_PATH

Evaluation results will be saved in the same folder as the evaluated mesh.
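
Putting the two steps together, a full evaluation round for the brandenburg_gate scene might look like the following (the checkpoint and mesh paths are hypothetical; substitute the ones produced by your run):

scripts/sdf_extract.sh brandenburg_exp config/train_brandenburg_gate.yaml ckpts/brandenburg_exp/last.ckpt 10
scripts/eval_pipeline.sh brandenburg_gate results/brandenburg_exp/extracted_mesh.ply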

Reconstructing custom data

Data preparation

Automatic generation

The code takes a standard COLMAP workspace as input. A script is provided to automatically convert a COLMAP workspace into our data format:

scripts/preprocess_data.sh

More instructions can be found in scripts/preprocess_data.sh.

Manual selection

However, if you wish to manually select a better bounding box (i.e., the reconstruction region), follow the steps below.

1. Generate semantic maps

Generate semantic maps:

python tools/prepare_data/prepare_semantic_maps.py --root_dir $WORKSPACE_PATH --gpu 0

2. Create scene metadata file

Create a file config.yaml in the workspace to hold the scene metadata. The target scene needs to be normalized into a unit sphere, which requires manual selection. One simple way is to use the SfM keypoints from COLMAP to determine the origin and radius (a small script sketch for this follows the example below). A bounding box is also required; it can be set to [origin - radius, origin + radius], or restricted to only the region you are interested in.

{
    name: brandenburg_gate, # scene name
    origin: [ 0.568699, -0.0935532, 6.28958 ], 
    radius: 4.6,
    eval_bbx: [[-14.95992661, -1.97035599, -16.59869957],[48.60944366, 30.66258621, 12.81980324]],
    voxel_size: 0.25,
    min_track_length: 10,
    # The following configuration is only used in evaluation and can be ignored for your own scene
    sfm2gt: [[1, 0, 0, 0],
            [ 0, 1, 0, 0],
            [ 0, 0, 1, 0],
            [ 0, 0, 0, 1]],
}
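
If you prefer to estimate origin and radius programmatically rather than picking them by hand, the minimal sketch below shows one way to do it from the COLMAP sparse points. It is not part of this repository and assumes numpy and pycolmap are installed; robust percentiles are used so a few outlier points do not inflate the sphere.

# Minimal sketch (not part of this repo): estimate `origin` and `radius` for
# config.yaml from the COLMAP sparse reconstruction. Assumes numpy + pycolmap.
import numpy as np
import pycolmap

rec = pycolmap.Reconstruction("dense/sparse")            # cameras.bin / images.bin / points3D.bin
xyz = np.array([p.xyz for p in rec.points3D.values()])   # N x 3 SfM keypoints

# Robust bounds: ignore the most extreme 2% of points along each axis.
lo, hi = np.percentile(xyz, 2, axis=0), np.percentile(xyz, 98, axis=0)
origin = (lo + hi) / 2.0
radius = float(np.max(hi - lo) / 2.0)

print("origin:", origin.tolist())
print("radius:", radius)
# One possible eval_bbx, as suggested above: [origin - radius, origin + radius]
print("eval_bbx:", [(origin - radius).tolist(), (origin + radius).tolist()])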

3. Generate cache

Run the following command with a WORKSPACE_PATH specified:

scripts/data_generation.sh $WORKSPACE_PATH

After completing the above steps, whether automatically or manually, the COLMAP workspace should look like this:

└── brandenburg_gate
    ├── brandenburg_gate.tsv
    ├── cache_sgs
    │   └── splits
    │       ├── rays1_meta_info.json
    │       ├── rgbs1_meta_info.json
    │       ├── split_0
    │       │   ├── rays1.h5
    │       │   └── rgbs1.h5
    │       ├── split_1
    │       └── ...
    ├── config.yaml
    ├── dense
    │   └── sparse
    │       ├── cameras.bin
    │       ├── images.bin
    │       └── points3D.bin
    └── semantic_maps
        ├── 99119670_397881696.jpg
        ├── 99128562_6434086647.jpg
        ├── 99250931_9123849334.jpg
        ├── 99388860_2887395078.jpg
        └── ...

Training

Change DATASET.ROOT_DIR to the COLMAP workspace path in config/train.yaml, and run:

scripts/train.sh $EXP_NAME config/train.yaml $NUM_GPU $NUM_NODE

Additionally, NEUCONW.SDF_CONFIG.inside_outside should be set to True when training on an indoor scene (refer to config/train_indoor.yaml).
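
For reference, the two settings mentioned above correspond to nested keys in the YAML config. A sketch of just those entries might look like the following (surrounding keys omitted; the layout is inferred from the dotted names above and the workspace path is hypothetical):

DATASET:
  ROOT_DIR: /path/to/your/colmap_workspace   # your COLMAP workspace
NEUCONW:
  SDF_CONFIG:
    inside_outside: True                     # set to True only for indoor scenes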

Extracting mesh

scripts/sdf_extract.sh $EXP_NAME config/train.yaml $CKPT_PATH $EVAL_LEVEL
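
As with the Heritage-Recon scenes, a concrete call might look like this (the checkpoint path is hypothetical; the Heritage-Recon examples above use an evaluation level of 10):

scripts/sdf_extract.sh my_scene_exp config/train.yaml ckpts/my_scene_exp/last.ckpt 10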

The reconstructed meshes will be saved to PROJECT_PATH/results.

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@inproceedings{sun2022neuconw,
  title={Neural {3D} Reconstruction in the Wild},
  author={Sun, Jiaming and Chen, Xi and Wang, Qianqian and Li, Zhengqi and Averbuch-Elor, Hadar and Zhou, Xiaowei and Snavely, Noah},
  booktitle={SIGGRAPH Conference Proceedings},
  year={2022}
}

Acknowledgement

Part of our code is derived from nerf_pl and NeuS; thanks to their authors for their great work.

More Repositories

1. EasyMocap (Python, 3,279 stars): Make human motion capture easier.
2. LoFTR (Jupyter Notebook, 2,054 stars): Code for "LoFTR: Detector-Free Local Feature Matching with Transformers", CVPR 2021, T-PAMI 2022
3. NeuralRecon (Python, 1,913 stars): Code for "NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video", CVPR 2021 oral
4. 4K4D (Python, 1,417 stars): [CVPR 2024] 4K4D: Real-Time 4D View Synthesis at 4K Resolution
5. snake (Jupyter Notebook, 1,142 stars): Code for "Deep Snake for Real-Time Instance Segmentation", CVPR 2020 oral
6. OnePose (Python, 903 stars): Code for "OnePose: One-Shot Object Pose Estimation without CAD Models", CVPR 2022
7. neuralbody (Python, 897 stars): Code for "Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans", CVPR 2021 best paper candidate
8. pvnet (Jupyter Notebook, 788 stars): Code for "PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation", CVPR 2019 oral
9. street_gaussians (576 stars): Code for "Street Gaussians for Modeling Dynamic Urban Scenes"
10. mvpose (Jupyter Notebook, 504 stars): Code for "Fast and Robust Multi-Person 3D Pose Estimation from Multiple Views" (CVPR 2019, T-PAMI 2021)
11. animatable_nerf (Python, 488 stars): Code for "Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos", TPAMI 2024, ICCV 2021
12. manhattan_sdf (Python, 482 stars): Code for "Neural 3D Scene Reconstruction with the Manhattan-world Assumption", CVPR 2022 Oral
13. EasyVolcap (Python, 461 stars): [SIGGRAPH Asia 2023 (Technical Communications)] EasyVolcap: Accelerating Neural Volumetric Video Research
14. ENeRF (Python, 400 stars): SIGGRAPH Asia 2022: Code for "Efficient Neural Radiance Fields for Interactive Free-viewpoint Video"
15. DetectorFreeSfM (393 stars): Code for "Detector-Free Structure from Motion", arXiv preprint
16. clean-pvnet (C++, 384 stars): Code for "PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation", CVPR 2019 oral
17. NeuMesh (Python, 374 stars): Code for "NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing", ECCV 2022 Oral
18. AutoRecon (Python, 341 stars): Code for "AutoRecon: Automated 3D Object Discovery and Reconstruction", CVPR 2023 (Highlight)
19. OnePose_Plus_Plus (Python, 329 stars): Code for "OnePose++: Keypoint-Free One-Shot Object Pose Estimation without CAD Models", NeurIPS 2022
20. object_nerf (Python, 306 stars): Code for "Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering", ICCV 2021
21. PVIO (C++, 298 stars): Robust and Efficient Visual-Inertial Odometry with Multi-plane Priors
22. Vox-Fusion (Python, 257 stars): Code for "Dense Tracking and Mapping with Voxel-based Neural Implicit Representation", ISMAR 2022
23. EfficientLoFTR (Jupyter Notebook, 251 stars)
24. ENFT-SfM (C++, 250 stars): This source code provides a reference implementation for ENFT-SfM.
25. Wis3D (TypeScript, 248 stars): A web-based 3D visualization tool for 3D computer vision.
26. SMAP (Python, 237 stars): Code for "SMAP: Single-Shot Multi-Person Absolute 3D Pose Estimation" (ECCV 2020)
27. mlp_maps (Cuda, 230 stars): Code for "Representing Volumetric Videos as Dynamic MLP Maps", CVPR 2023
28. im4d (Python, 226 stars): SIGGRAPH Asia 2023: Code for "Im4D: High-Fidelity and Real-Time Novel View Synthesis for Dynamic Scenes"
29. disprcnn (Jupyter Notebook, 211 stars): Code release for "Stereo 3D Object Detection via Shape Prior Guided Instance Disparity Estimation" (CVPR 2020, TPAMI 2021)
30. PVO (Python, 198 stars): Code for "PVO: Panoptic Visual Odometry", CVPR 2023
31. GIFT (Python, 190 stars): Code for "GIFT: Learning Transformation-Invariant Dense Visual Descriptors via Group CNNs", NeurIPS 2019
32. Mirrored-Human (184 stars): Code for "Reconstructing 3D Human Pose by Watching Humans in the Mirror" (CVPR 2021 Oral)
33. pvnet-rendering (Python, 177 stars): Render images for pvnet training
34. IntrinsicNeRF (Python, 174 stars): Code for "IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis", ICCV 2023
35. InvRender (Python, 165 stars): Code for "Modeling Indirect Illumination for Inverse Rendering", CVPR 2022
36. EIBA (C++, 161 stars): Efficient Incremental BA
37. instant-nvr (Python, 144 stars): [CVPR 2023] Code for "Learning Neural Volumetric Representations of Dynamic Humans in Minutes"
38. Monocular_3D_human (137 stars)
39. eval-vislam (C++, 137 stars): Toolkit for VI-SLAM evaluation.
40. SINE (Python, 123 stars): Code for "Semantic-driven Image-based NeRF Editing with Prior-guided Editing Field", CVPR 2023
41. rnin-vio (Python, 116 stars)
42. deltar (Python, 112 stars): Code for "DELTAR: Depth Estimation from a Light-weight ToF Sensor And RGB Image", ECCV 2022
43. NeuSC (Python, 111 stars): A Temporal Voyage: Code for "Neural Scene Chronology" [CVPR 2023]
44. DeFlowSLAM (109 stars): Code for "DeFlowSLAM: Self-Supervised Scene Motion Decomposition for Dynamic Dense SLAM"
45. SegmentBA (C++, 108 stars): Segment based Bundle Adjustment
46. CoLi-BA (C++, 107 stars)
47. iMoCap (Python, 104 stars): Dataset for ECCV 2020 "Motion Capture from Internet Videos"
48. VS-Net (Python, 86 stars): VS-Net: Voting with Segmentation for Visual Localization
49. UDOLO (Python, 84 stars)
50. pats (C++, 84 stars): Code for "PATS: Patch Area Transportation with Subdivision for Local Feature Matching", CVPR 2023
51. SA-HMR (Python, 79 stars): Code for "Learning Human Mesh Recovery in 3D Scenes", CVPR 2023
52. ENFT (C++, 76 stars): Efficient Non-Consecutive Feature Tracking for Robust SfM http://www.zjucvg.net/ls-acts/ls-acts.html
53. TotalSelfScan (Python, 73 stars): Code for "TotalSelfScan: Learning Full-body Avatars from Self-Portrait Videos of Faces, Hands, and Bodies" (NeurIPS 2022)
54. SAM-Graph (69 stars): Code for "SAM-guided Graph Cut for 3D Instance Segmentation"
55. gcasp (Python, 66 stars): [CoRL 2022] Generative Category-Level Shape and Pose Estimation with Semantic Primitives
56. GeneAvatar (59 stars): Code for "GeneAvatar: Generic Expression-Aware Volumetric Head Avatar Editing from a Single Image", CVPR 2024
57. zju3dv.github.io (HTML, 57 stars)
58. vig-init (C++, 56 stars): Rapid and Robust Monocular Visual-Inertial Initialization with Gravity Estimation via Vertical Edges
59. coxgraph (C++, 54 stars): Code for "Coxgraph: Multi-Robot Collaborative, Globally Consistent, Online Dense Reconstruction System", IROS 2021 Best Paper Award Finalist on Safety, Security, and Rescue Robotics in memory of Motohiro Kisoi
60. RVL-Dynamic (Python, 47 stars): Code for "Prior Guided Dropout for Robust Visual Localization in Dynamic Environments", ICCV 2019
61. Vox-Surf (Python, 46 stars): Code for "Vox-Surf: Voxel-based Implicit Surface Representation", TVCG 2022
62. NIID-Net (Python, 43 stars): Code for "NIID-Net: Adapting Surface Normal Knowledge for Intrinsic Image Decomposition in Indoor Scenes", TVCG
63. hghoi (C++, 43 stars): ICCV 2023, Hierarchical Generation of Human-Object Interactions with Diffusion Probabilistic Models
64. RLP_VIO (C++, 42 stars): Code for "RLP-VIO: Robust and lightweight plane-based visual-inertial odometry for augmented reality", CAVW 2022
65. Mirror-NeRF (Python, 37 stars): Code for "Mirror-NeRF: Learning Neural Radiance Fields for Mirrors with Whitted-Style Ray Tracing", ACM MM 2023
66. AutoDecomp (HTML, 36 stars): 3D object discovery from casual object captures
67. RelightableAvatar (Python, 35 stars): [CVPR 2024 (Highlight)] Relightable and Animatable Neural Avatar from Sparse-View Video
68. CloseMoCap (33 stars): Official implementation of "Reconstructing Close Human Interaction from Multiple Views"
69. poking_perception (Python, 29 stars)
70. MagLoc-AR (14 stars)
71. MVN-AFM (Python, 11 stars): Code for "Multi-View Neural 3D Reconstruction of Micro-/Nanostructures with Atomic Force Microscopy"
72. blink_sim (11 stars)
73. pvnet-depth-sup (10 stars)
74. hybrid3d (C++, 10 stars)
75. nr_in_a_room (Python, 10 stars): Code for "Neural Rendering in a Room: Amodal 3D Understanding and Free-Viewpoint Rendering for the Closed Scene Composed of Pre-Captured Objects", ACM ToG
76. RNNPose (6 stars): RNNPose: Recurrent 6-DoF Object Pose Refinement with Robust Correspondence Field Estimation and Pose Optimization, CVPR 2022
77. rnin-vio.github.io (CSS, 2 stars)
78. LSFB (1 star)