• Stars: 347
• Rank: 121,413 (Top 3%)
• Language: Python
• License: MIT License
• Created: over 3 years ago
• Updated: over 3 years ago


Repository Details

This is an implementation of PIFuHD based on PyTorch.

Open-PIFuhd

This is an unofficial implementation of PIFuHD:

PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization (CVPR 2020)

Implementation

  • Training Coarse PIFuhd
  • Training Fine PIFuhd
  • Inference
  • Metrics (P2S, Normal, Chamfer)
  • GAN-generated front and back normal maps (Link)
  • Unsigned distance field and signed distance field (see the sketch after this list)
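
As a minimal sketch of what those two fields mean (this is not code from this repo, and the mesh path is hypothetical), the snippet below queries unsigned and signed distances with trimesh, which is already listed in the prerequisites:

# Illustration of unsigned vs. signed distance fields for a mesh with trimesh.
# "example.obj" is a placeholder path.
import numpy as np
import trimesh

mesh = trimesh.load("example.obj", force="mesh")
low, high = mesh.bounds                                  # axis-aligned bounding-box corners
points = np.random.uniform(low, high, size=(1024, 3))    # random query points inside the box

# Unsigned distance: distance from each point to the closest point on the surface.
_, unsigned_dist, _ = trimesh.proximity.closest_point(mesh, points)

# Signed distance: positive inside the mesh, negative outside (trimesh convention).
signed_dist = trimesh.proximity.signed_distance(mesh, points)

# PIFu-style occupancy labels follow directly from the sign.
inside = signed_dist > 0
print(unsigned_dist.mean(), inside.mean())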

Note that the pipeline I designed does not include the normal maps generated by pix2pixHD, because that is not the main difficulty in reimplementing PIFuHD.

Prerequisites

  • PyTorch>=1.6
  • json
  • PIL
  • skimage
  • tqdm
  • cv2
  • trimesh with pyembree
  • pyexr
  • PyOpenGL
  • freeglut (use sudo apt-get install freeglut3-dev for Ubuntu users)
  • (optional) EGL-related packages for rendering on headless machines (use apt install libgl1-mesa-dri libegl1-mesa libgbm1 for Ubuntu users)
  • face3d
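
A quick way to confirm the Python-side prerequisites are installed is the small helper below (not part of the repo); note that PyOpenGL imports as OpenGL and PIL comes from Pillow:

# Minimal sanity check that the required packages are importable.
import importlib

for pkg in ["torch", "PIL", "skimage", "tqdm", "cv2", "trimesh", "pyexr", "OpenGL"]:
    try:
        importlib.import_module(pkg)
        print(f"[ok]      {pkg}")
    except ImportError as err:
        print(f"[missing] {pkg}: {err}")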

Data processed

We use RenderPeople as our dataset, but our data size is 296 scans (270 for training and 26 for testing), which is fewer than the 500 reported in the paper.

Note that we are unable to release the full training data due to the restrictions on the commercial scans.

Initial data

I modified part of the code in PIFu (branch: PIFu-modify; download it into your project) so that it can process the directories where your models are saved.

bash ./scripts/process_obj.sh [--dir_models_path]
# e.g. bash ./scripts/process_obj.sh ../Garment/render_people_train/

Rendering data

I modified part of the code in PIFu so that it can process the directories where your models are saved.

python -m apps.render_data -i [--dir_models_path] -o [--save_processed_models_path] -s 1024 [Optional: -e]
# -e means use GPU rendering
# e.g. python -m apps.render_data -i ../Garment/render_people_train/ -o ../Garment/render_gen_1024_train/ -s 1024 -e

Render Normal Map

Render the front and back normal maps in the current project.

All config params are set in ./configs/PIFuhd_Render_People_HG_normal_map.py (see the comments below), then run:

bash ./scripts/generate.sh

# The params you can modify are in ./configs/PIFuhd_Render_People_HG_normal_map.py
# The important params here are, e.g.:
#   input_dir = '../Garment/render_gen_1024_train/'
#   cache = "../Garment/cache/render_gen_1024/rp_train/"
# input_dir is the output of the rendering step above (render_gen_1024_train)
# cache is where intermediate results (e.g., points sampled from the mesh) are saved

After processing all datasets, the directory tree looks like the following:

render_gen_1024_train/
├── rp_aaron_posed_004_BLD
│   ├── GEO
│   ├── MASK
│   ├── PARAM
│   ├── RENDER
│   ├── RENDER_NORMAL
│   ├── UV_MASK
│   ├── UV_NORMAL
│   ├── UV_POS
│   ├── UV_RENDER
│   └── val.txt
├── rp_aaron_posed_005_BLD
    ....
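
A small sketch (assuming the layout above; not an official tool from this repo) that checks every subject folder contains the expected sub-directories:

# Verify that each processed subject contains the expected sub-directories.
import os

root = "../Garment/render_gen_1024_train/"    # path used in the examples above
expected = ["GEO", "MASK", "PARAM", "RENDER", "RENDER_NORMAL",
            "UV_MASK", "UV_NORMAL", "UV_POS", "UV_RENDER"]

for subject in sorted(os.listdir(root)):
    subject_dir = os.path.join(root, subject)
    if not os.path.isdir(subject_dir):
        continue
    missing = [d for d in expected if not os.path.isdir(os.path.join(subject_dir, d))]
    if missing:
        print(f"{subject}: missing {missing}")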

Training

Training coarse-pifuhd

All config params are set in ./configs/PIFuhd_Render_People_HG_coarse.py, where you can modify whatever you want.

Note that this project is designed to be friendly, which means you can easily replace the original backbone or head with your own :)

bash ./scripts/train_pfhd_coarse.sh

Training Fine-PIFuhd

The same as coarse PIFuhd: all config params are set in ./configs/PIFuhd_Render_People_HG_fine.py.

bash ./scripts/train_pfhd_fine.sh

If you run into GPU memory problems, please reduce batch_size in ./configs/*.py.
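
For instance, the line to change might look like the following; this is a hypothetical excerpt, and the actual variable layout in the config files may differ:

# Hypothetical excerpt of ./configs/PIFuhd_Render_People_HG_fine.py
batch_size = 4   # lower this (e.g., to 2 or 1) if you hit CUDA out-of-memory errors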

Inference

bash ./scripts/test_pfhd_coarse.sh
#or 
bash ./scripts/test_pfhd_fine.sh

The results will be saved to checkpoints/PIFuhd_Render_People_HG_[coarse/fine]/gallery/test/model_name/*.obj; you can then use MeshLab to view the generated models.
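
If MeshLab is not at hand, the generated meshes can also be inspected quickly with trimesh (already a prerequisite); the file name below is a placeholder for whatever *.obj the test script produced:

# Quick preview of a reconstructed mesh without MeshLab.
import trimesh

mesh = trimesh.load("checkpoints/PIFuhd_Render_People_HG_coarse/gallery/test/model_name/result.obj",
                    force="mesh")
print(mesh.vertices.shape, mesh.faces.shape)
mesh.show()   # opens an interactive viewer window (needs a display and pyglet)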

Metrics

export MESA_GL_VERSION_OVERRIDE=3.3 
# eval coarse-pifuhd
python ./tools/eval_pifu.py  --config ./configs/PIFuhd_Render_People_HG_coarse.py
# eval fine-pifuhd
python ./tools/eval_pifu.py  --config ./configs/PIFuhd_Render_People_HG_fine.py
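
For reference, the sketch below illustrates how P2S and Chamfer distances between a reconstructed mesh and a ground-truth mesh are commonly computed with trimesh. It is only an illustration of the metric definitions, not the exact evaluation code in ./tools/eval_pifu.py, and the file names are placeholders:

# Illustration of P2S and Chamfer distance between two meshes using trimesh.
import trimesh

pred = trimesh.load("pred.obj", force="mesh")
gt = trimesh.load("gt.obj", force="mesh")
n = 10000

pred_pts = pred.sample(n)   # points sampled on the predicted surface
gt_pts = gt.sample(n)       # points sampled on the ground-truth surface

# P2S: average distance from predicted surface points to the ground-truth surface.
_, p2s_dists, _ = trimesh.proximity.closest_point(gt, pred_pts)
p2s = p2s_dists.mean()

# Chamfer: symmetrized average of the two directed surface distances.
_, gt2pred_dists, _ = trimesh.proximity.closest_point(pred, gt_pts)
chamfer = 0.5 * (p2s_dists.mean() + gt2pred_dists.mean())

print(f"P2S: {p2s:.4f}  Chamfer: {chamfer:.4f}")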

Pretrained weights

We provide pretrained models of PIFuhd (fine-pifuhd and coarse-pifuhd).

Note that the trained models use front and back normal maps rendered from the mesh rather than normal maps obtained from GANs. Therefore, you need to render the normal maps of the test OBJ files as well.

Demo

We provide rendering code using the free models from RenderPeople. This tutorial uses the rp_dennis_posed_004 model. Please download the model from this link and unzip the content. Then use the following command to reconstruct the model:

Debug

I provide a boolean param (debug, in all of the config files) so you can check whether the points sampled from the mesh are correct.
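
As a rough illustration (not the project's own debug output), one way to inspect sampled points is to export them as a point cloud and overlay them on the mesh in MeshLab; the paths and the sampling call below are placeholders for your own data:

# Export sampled points as a point cloud so they can be overlaid on the mesh.
import trimesh

mesh = trimesh.load("example.obj", force="mesh")          # placeholder path
points = mesh.sample(5000)                                # stand-in for your sampled points
trimesh.PointCloud(points).export("sampled_points.ply")   # open together with the mesh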

Visualization

As the figures below show, the left is the input image, the middle is the result of coarse-pifuhd, and the right is fine-pifuhd.

Reconstruction on Render People Datasets

Note that our training dataset is smaller than the official one (270 subjects for ours vs. 450 for the paper), which changes the performance to some degree.

Method                               IoU          ACC          Recall       P2S     Normal   Chamfer
PIFu                                 0.748        0.880        0.856        1.801   0.1446   2.00
Coarse-PIFuhd (+front/back normals)  0.865 (5cm)  0.931 (5cm)  0.923 (5cm)  1.242   0.1205   1.4015
Fine-PIFuhd (+front/back normals)    0.813 (3cm)  0.896 (3cm)  0.904 (3cm)  -       0.1138   -

There is an issue about why the P2S of fine-pifuhd is a bit larger than that of coarse-pifuhd. This is because I did not add any post-processing to clean up artifacts in the reconstruction. However, the details of the human meshes produced by fine-pifuhd are clearly better than those of coarse-pifuhd.
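
For example, a simple clean-up that usually helps P2S is keeping only the largest connected component of the reconstruction; here is a minimal sketch with trimesh (not part of this repo's pipeline, and the file names are placeholders):

# Keep only the largest connected component of a reconstructed mesh.
import trimesh

mesh = trimesh.load("result.obj", force="mesh")
parts = mesh.split(only_watertight=False)           # connected components
largest = max(parts, key=lambda m: len(m.vertices))
largest.export("result_clean.obj")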

About Me

I hope that this project can contribute something to our community, especially for implicit-field methods.

By the way, if you think this project is helpful to you, please don't forget to star it :)

Related Research

Monocular Real-Time Volumetric Performance Capture (ECCV 2020) Ruilong Li*, Yuliang Xiu*, Shunsuke Saito, Zeng Huang, Kyle Olszewski, Hao Li

PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization (CVPR 2020) Shunsuke Saito, Tomas Simon, Jason Saragih, Hanbyul Joo

ARCH: Animatable Reconstruction of Clothed Humans (CVPR 2020) Zeng Huang, Yuanlu Xu, Christoph Lassner, Hao Li, Tony Tung

Robust 3D Self-portraits in Seconds (CVPR 2020) Zhe Li, Tao Yu, Chuanyu Pan, Zerong Zheng, Yebin Liu

Learning to Infer Implicit Surfaces without 3d Supervision (NeurIPS 2019) Shichen Liu, Shunsuke Saito, Weikai Chen, Hao Li

More Repositories

1. Yolo_Nano: PyTorch implementation of yolo_Nano for pedestrian detection (Python, 140 stars)
2. Deeperlab-pytorch: Implements the segmentation part of Deeperlab (Python, 138 stars)
3. OPEC-Net: Peeking into Occluded Joints: A Novel Framework for Crowd Pose Estimation (ECCV 2020) (Python, 130 stars)
4. RDN-pytorch: Implementation of RDN in PyTorch (Python, 50 stars)
5. LibTorch_RefineDet: RefineDet API for PyTorch in C++ (Makefile, 27 stars)
6. Minimal-Hand: Minimal-Hand based on PyTorch (CVPR 2020) (Python, 27 stars)
7. Facial_Expression_Similarity: A fast, modular reference implementation of "A Compact Embedding for Facial Expression Similarity" models using PyTorch (Python, 17 stars)
8. ETHSeg: An Amodel Instance Segmentation Network and a Real-world Dataset for X-Ray Waste Inspection (CVPR 2022) (12 stars)
9. pytorch_cpp: PyTorch using the C++ API (Makefile, 8 stars)
10. REC-MV: REconstructing 3D Dynamic Cloth from Monocular Videos (CVPR 2023) (7 stars)
11. TSN-pytorch: TSN network for action recognition, with voting as described in the paper (Python, 6 stars)
12. YOLO2-pytorch: Implementation of YOLOv2 in PyTorch (Python, 5 stars)
13. pytorch_cpp_API: A general template for classification tasks using the PyTorch C++ API (Makefile, 5 stars)
14. lingtengqiu.github.io (HTML, 3 stars)
15. minimum-bounding-rectangle-MBR: Minimum bounding rectangle, implemented using C++ (Python, 3 stars)
16. FCN-pytorch: FCN implementation in PyTorch (Python, 3 stars)
17. TSN_dense_flow_process: Pre-processing for the TSN network to generate the training file list (C++, 2 stars)
18. Seg_tool_and_mini_rec_create: A tool for segmentation labeling and an algorithm to extract each mask's bounding rectangle into XML messages (Python, 2 stars)
19. reconstruction-of-hands-and-human-survey (1 star)
20. RichDreamer (1 star)