Code for "Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos" TPAMI 2024, ICCV 2021

News

  • 07/09/2022 This repository includes the implementation of Animatable SDF (now dubbed Animatable Neural Fields).
  • 07/09/2022 We release the extended version of Animatable NeRF. We evaluated three different versions of Animatable Neural Fields: vanilla Animatable NeRF, a version where the neural blend weight field is replaced with a displacement field, and a version where the canonical NeRF model is replaced with a neural surface field (the output is an SDF instead of volume density, also using a displacement field). We also provide an evaluation framework for comparing reconstruction quality.
  • 10/28/2021 To make comparisons with Animatable NeRF easier on the Human3.6M dataset, we have saved the quantitative results here, which also contain the results of other methods, including Neural Body, D-NeRF, Multi-view Neural Human Rendering, and Deferred Neural Human Rendering.

Animatable Neural Radiance Fields for Modeling Dynamic Human Bodies

Project Page | Video | Paper | Data | Extension

teaser

Animatable Neural Radiance Fields for Modeling Dynamic Human Bodies
Sida Peng, Junting Dong, Qianqian Wang, Shangzhan Zhang, Qing Shuai, Xiaowei Zhou, Hujun Bao
ICCV 2021

Any questions or discussions are welcome!

Installation

Please see INSTALL.md for manual installation.
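
For reference, a typical setup looks like the sketch below; the environment name and Python version here are assumptions, and INSTALL.md remains the authoritative guide.

# create and activate a conda environment (name and version are illustrative)
conda create -n aninerf python=3.7
conda activate aninerf

# install the Python dependencies (see INSTALL.md for the exact steps)
pip install -r requirements.txt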

Run the code on Human3.6M

Since the license of the Human3.6M dataset does not allow us to distribute its data, we cannot release the processed Human3.6M dataset publicly. If you are interested in the processed data, please email me.

We provide the pretrained models here.
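
For example, to stage the S9 checkpoints for the test commands below, the downloaded files can be copied into place as follows (the source path is a placeholder for wherever you saved the download):

# stage the pretrained checkpoints at the paths expected by the configs
mkdir -p data/trained_model/deform/aninerf_s9p data/trained_model/deform/aninerf_s9p_full
cp /path/to/download/aninerf_s9p/latest.pth data/trained_model/deform/aninerf_s9p/latest.pth
cp /path/to/download/aninerf_s9p_full/latest.pth data/trained_model/deform/aninerf_s9p_full/latest.pth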

Test on Human3.6M

The command lines for testing are recorded in test.sh.

Take the test on S9 as an example.

  1. Download the corresponding pretrained models and put them at $ROOT/data/trained_model/deform/aninerf_s9p/latest.pth and $ROOT/data/trained_model/deform/aninerf_s9p_full/latest.pth.

  2. Test on training human poses:

    python run.py --type evaluate --cfg_file configs/aninerf_s9p.yaml exp_name aninerf_s9p resume True
  3. Test on unseen human poses:

    python run.py --type evaluate --cfg_file configs/aninerf_s9p.yaml exp_name aninerf_s9p_full resume True aninerf_animation True init_aninerf aninerf_s9p test_novel_pose True

Visualization on Human3.6M

Take the visualization on S9 as an example.

  1. Download the corresponding pretrained models and put them at $ROOT/data/trained_model/deform/aninerf_s9p/latest.pth and $ROOT/data/trained_model/deform/aninerf_s9p_full/latest.pth.

  2. Visualization:

    • Visualize novel views of the 0th frame
    python run.py --type visualize --cfg_file configs/aninerf_s9p.yaml exp_name aninerf_s9p resume True vis_novel_view True begin_ith_frame 0
    • Visualize views of dynamic humans from the 3rd camera
    python run.py --type visualize --cfg_file configs/aninerf_s9p.yaml exp_name aninerf_s9p resume True vis_pose_sequence True test_view "3,"
    • Visualize mesh
    # generate meshes
    python run.py --type visualize --cfg_file configs/aninerf_s9p.yaml exp_name aninerf_s9p vis_posed_mesh True
  3. The visualization results are saved to $ROOT/data/novel_view/aninerf_s9p and $ROOT/data/novel_pose/aninerf_s9p.
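
If you want a video from the rendered frames, a tool such as ffmpeg can stitch them together. The sketch below assumes numbered PNG frames; adjust the input pattern to match the actual filenames in the output directory.

# assemble rendered frames into an mp4 (the input pattern is an assumption)
ffmpeg -framerate 30 -pattern_type glob -i 'data/novel_view/aninerf_s9p/*.png' \
    -c:v libx264 -pix_fmt yuv420p novel_view_s9p.mp4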

Training on Human3.6M

Take the training on S9 as an example. The command lines for training are recorded in train.sh.

  1. Train:

    # training
    python train_net.py --cfg_file configs/aninerf_s9p.yaml exp_name aninerf_s9p resume False
    
    # training the blend weight fields of unseen human poses
    python train_net.py --cfg_file configs/aninerf_s9p.yaml exp_name aninerf_s9p_full resume False aninerf_animation True init_aninerf aninerf_s9p
  2. Tensorboard:

    tensorboard --logdir data/record/deform

Run the code on ZJU-MoCap

To download the ZJU-MoCap dataset, please fill in the agreement and email me ([email protected]), cc'ing Xiaowei Zhou ([email protected]), to request the download link.

We provide the pretrained models here.

Test on ZJU-MoCap

The command lines for testing are recorded in test.sh.

Take the test on 313 as an example.

  1. Download the corresponding pretrained models and put them at $ROOT/data/trained_model/deform/aninerf_313/latest.pth and $ROOT/data/trained_model/deform/aninerf_313_full/latest.pth.

  2. Test on training human poses:

    python run.py --type evaluate --cfg_file configs/aninerf_313.yaml exp_name aninerf_313 resume True
  3. Test on unseen human poses:

    python run.py --type evaluate --cfg_file configs/aninerf_313.yaml exp_name aninerf_313_full resume True aninerf_animation True init_aninerf aninerf_313 test_novel_pose True

Visualization on ZJU-MoCap

Take the visualization on 313 as an example.

  1. Download the corresponding pretrained models and put them at $ROOT/data/trained_model/deform/aninerf_313/latest.pth and $ROOT/data/trained_model/deform/aninerf_313_full/latest.pth.

  2. Visualization:

    • Visualize novel views of the 0th frame
    python run.py --type visualize --cfg_file configs/aninerf_313.yaml exp_name aninerf_313 resume True vis_novel_view True begin_ith_frame 0
    • Visualize views of dynamic humans from the 0th camera
    python run.py --type visualize --cfg_file configs/aninerf_313.yaml exp_name aninerf_313 resume True vis_pose_sequence True test_view "0,"
    • Visualize mesh
    # generate meshes
    python run.py --type visualize --cfg_file configs/aninerf_313.yaml exp_name aninerf_313 vis_posed_mesh True
  3. The visualization results are saved to $ROOT/data/novel_view/aninerf_313 and $ROOT/data/novel_pose/aninerf_313.

Training on ZJU-MoCap

Take the training on 313 as an example. The command lines for training are recorded in train.sh.

  1. Train:

    # training
    python train_net.py --cfg_file configs/aninerf_313.yaml exp_name aninerf_313 resume False
    
    # training the blend weight fields of unseen human poses
    python train_net.py --cfg_file configs/aninerf_313.yaml exp_name aninerf_313_full resume False aninerf_animation True init_aninerf aninerf_313
  2. Tensorboard:

    tensorboard --logdir data/record/deform

Extended Version

Additional training and test command lines are recorded in train.sh and test.sh.

Moreover, we compiled a list of all possible commands in extension.sh, using the S9 sequence of the Human3.6M dataset.

This includes training, evaluating, and visualizing the original Animatable NeRF implementation and all three extended versions.

Here we list the portion of the commands for the SDF-PDF configuration:

# extension: anisdf_pdf

# evaluating on training poses for anisdf_pdf
python run.py --type evaluate --cfg_file configs/sdf_pdf/anisdf_pdf_s9p.yaml exp_name anisdf_pdf_s9p resume True

# evaluating on novel poses for anisdf_pdf
python run.py --type evaluate --cfg_file configs/sdf_pdf/anisdf_pdf_s9p.yaml exp_name anisdf_pdf_s9p resume True test_novel_pose True

# visualizing novel view of 0th frame for anisdf_pdf
python run.py --type visualize --cfg_file configs/sdf_pdf/anisdf_pdf_s9p.yaml exp_name anisdf_pdf_s9p resume True vis_novel_view True begin_ith_frame 0

# visualizing animation of 3rd camera for anisdf_pdf
python run.py --type visualize --cfg_file configs/sdf_pdf/anisdf_pdf_s9p.yaml exp_name anisdf_pdf_s9p resume True vis_pose_sequence True test_view "3,"

# generating posed mesh for anisdf_pdf
python run.py --type visualize --cfg_file configs/sdf_pdf/anisdf_pdf_s9p.yaml exp_name anisdf_pdf_s9p vis_posed_mesh True

# training base model for anisdf_pdf
python train_net.py --cfg_file configs/sdf_pdf/anisdf_pdf_s9p.yaml exp_name anisdf_pdf_s9p resume False

To run Animatable NeRF on other officially supported datasets, simply change the --cfg_file and exp_name parameters.
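
For instance, training and evaluating SDF-PDF on sequence 313 of ZJU-MoCap would look like the commands below; the config filename here is an assumption, so check configs/sdf_pdf/ for the exact name.

# assumed config name; check configs/sdf_pdf/ for the exact filename
python train_net.py --cfg_file configs/sdf_pdf/anisdf_pdf_313.yaml exp_name anisdf_pdf_313 resume False
python run.py --type evaluate --cfg_file configs/sdf_pdf/anisdf_pdf_313.yaml exp_name anisdf_pdf_313 resume True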

Note that for Animatable NeRF with a pose-dependent displacement field (NeRF-PDF) and for Animatable Neural Surface with a pose-dependent displacement field (SDF-PDF), there is no need to train the blend weight fields of unseen human poses.

MonoCap dataset

MonoCap is a dataset compiled by the authors of Animatable SDF from the DeepCap and DynaCap datasets.

Since the licenses of the DeepCap and DynaCap datasets do not allow us to distribute their data, we cannot release the processed MonoCap dataset publicly. If you are interested in the processed data, please download the raw data from here and email me for instructions on how to process it.

SyntheticHuman Dataset

SyntheticHuman is a dataset created by the authors of Animatable SDF. It contains multi-view videos of 3D humans rendered from characters in the RenderPeople dataset, along with the ground-truth 3D models.

Since the license of the RenderPeople dataset does not allow distribution of the 3D models, we cannot release the processed SyntheticHuman dataset publicly. If you are interested in this dataset, please email me for instructions on how to generate the data.

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@inproceedings{peng2021animatable,
  title={Animatable Neural Radiance Fields for Modeling Dynamic Human Bodies},
  author={Peng, Sida and Dong, Junting and Wang, Qianqian and Zhang, Shangzhan and Shuai, Qing and Zhou, Xiaowei and Bao, Hujun},
  booktitle={ICCV},
  year={2021}
}
