Robust Model-based Face Reconstruction through Weakly-Supervised Outlier Segmentation

Chunlu Li, Andreas Morel-Forster, Thomas Vetter, Bernhard Egger*, and Adam Kortylewski*

pdf | video

This work enables a model-based face autoencoder to segment occlusions accurately for 3D face reconstruction, yielding state-of-the-art occlusion segmentation while keeping the reconstruction robust to occlusions. It requires only weak supervision for the face-reconstruction subnetwork and can be trained end-to-end efficiently. The effectiveness of the method is verified on the CelebA-HQ dataset, the AR dataset, and the NoW Challenge.

● [Update 20230331] Docker image with trained model available now!

● [Update 20230321] Accepted by CVPR 2023!

  1. Docker with pre-trained model coming soon.

● [Update 20210315] Reached the SOTA on the NoW Challenge!

  1. ArcFace is adopted for the perceptual-level loss.

  2. Hyper-parameters are better tuned for higher reconstruction accuracy.

  3. Test and evaluation code released. 3D shapes (.obj meshes), rendered faces, and estimated masks are available, along with the evaluation indices (accuracy, precision, F1 score, and recall rate); a sketch of these indices follows this list.
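
As a reference for these indices, below is a minimal sketch of how accuracy, precision, recall, and F1 can be computed over binary visible-skin masks with NumPy. The function name and the exact definitions are our assumptions; the released evaluation code may differ in detail.

import numpy as np

def mask_metrics(pred, gt):
    """pred, gt: boolean arrays of equal shape (True = visible skin)."""
    tp = np.logical_and(pred, gt).sum()    # visible skin, correctly detected
    fp = np.logical_and(pred, ~gt).sum()   # predicted skin over an occluder
    fn = np.logical_and(~pred, gt).sum()   # missed visible skin
    tn = np.logical_and(~pred, ~gt).sum()  # occluder, correctly rejected
    accuracy = (tp + tn) / pred.size
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-8)
    return accuracy, precision, recall, f1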

Features

● Accurate Occlusion Segmentation

This method provides reliable occlusion segmentation masks, and training the segmentation network requires no additional supervision.

● Occlusion-robust 3D Reconstruction

This method produces accurate 3D face model fits that are robust to occlusions.

[New!] Our method, named 'FOCUS' (Face-autoencoder and OCclUsion Segmentation), reaches the SOTA on the NoW Challenge!

The results of the state-of-the-art methods on the NoW face benchmark are as follows:

Rank  Method                                        Median (mm)  Mean (mm)  Std (mm)
1     FOCUS (Ours)                                  1.04         1.30       1.10
2     DECA [Feng et al., SIGGRAPH 2021]             1.09         1.38       1.18
3     Deep3DFace PyTorch [Deng et al., CVPRW 2019]  1.11         1.41       1.21
4     RingNet [Sanyal et al., CVPR 2019]            1.21         1.53       1.31
5     Deep3DFace [Deng et al., CVPRW 2019]          1.23         1.54       1.29
6     3DDFA-V2 [Guo et al., ECCV 2020]              1.23         1.57       1.39
7     MGCNet [Shang et al., ECCV 2020]              1.31         1.87       2.63
8     PRNet [Feng et al., ECCV 2018]                1.50         1.98       1.88
9     3DMM-CNN [Tran et al., CVPR 2017]             1.84         2.33       2.05

For more details about the evaluation, check the NoW Challenge website.

● Easy to implement

This method follows a step-wise training scheme and is easy to implement.

Getting Started

To train and/or test this work, you need to:

  1. Prepare the data

  2. Download 3DMM

  3. Install Arcface

  4. Install Dependencies

  5. Train step-by-step

  6. Test

● Data Preparation

  1. Prepare .csv files for the training set, validation set, and testing set.

    Each row of a .csv file should contain [filename + landmark coordinates].

    We recommend using the 68 2D landmarks detected by 2D-and-3D-face-alignment (a generation sketch follows the directory tree below).

  2. To evaluate the accuracy of the estimated masks, ground-truth occlusion segmentation masks are required. Please name the target image 'image_name.jpg' and its ground-truth mask 'image_name_visible_skin_mask.png'.

    The image directory should follow the structure below:

     ./image_root
     ├── Dataset                      # Database folder containing the train set, validation set, and test set.
     │   ├── 1.jpg                    # Target image
     │   ├── 1_visible_skin_mask.png  # GT mask for testing. (optional for training)
     │   └── ...
     ├── train_landmarks.csv      # .csv file for the train set.
     ├── test_landmarks.csv       # .csv file for the test set.
     ├── val_landmarks.csv        # .csv file for the validation set.
     └── all_landmarks.csv        # .csv file for the whole dataset. (optional)
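
For step 1 above, here is a minimal sketch of how such a landmark .csv could be generated with the face-alignment package behind 2D-and-3D-face-alignment. The "filename followed by 136 flattened coordinates" layout is our assumption; check this repository's data loader for the exact format. (Older face-alignment releases name the enum LandmarksType._2D instead of TWO_D.)

import csv
import glob
import os

import face_alignment
from skimage import io

# 2D landmark detector; use device='cuda' if a GPU is available.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType.TWO_D, device='cpu')

with open('./image_root/train_landmarks.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for path in sorted(glob.glob('./image_root/Dataset/*.jpg')):
        preds = fa.get_landmarks(io.imread(path))
        if preds is None:              # no face detected; skip this image
            continue
        landmarks = preds[0]           # (68, 2) array for the first face
        writer.writerow([os.path.basename(path)] + landmarks.flatten().tolist())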
    

● Download 3DMM

  1. Our implementation employs the BFM 2017. Please copy 'model2017-1_bfm_nomouth.h5' to './basel_3DMM' (a quick sanity check follows).
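
To verify the download, this small optional check opens the model with h5py; the group name 'shape/model/mean' follows the standard Basel Face Model 2017 layout.

import h5py

with h5py.File('./basel_3DMM/model2017-1_bfm_nomouth.h5', 'r') as f:
    mean_shape = f['shape/model/mean'][:]   # flattened xyz coordinates of the mean face
    print('vertices:', mean_shape.shape[0] // 3)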

● Install ArcFace for Perceptual Loss

We depend on ArcFace to compute perceptual features for the target images and the rendered images (a loss sketch follows the directory layout below).

  1. Download the trained model.

  2. Place ms1mv3_arcface_r50_fp16.zip and backbone.pth under ./Occlusion_Robust_MoFA/models/.

  3. To install ArcFace, please run the following commands:

cd ./Occlusion_Robust_MoFA
git clone https://github.com/deepinsight/insightface.git
cp -r ./insightface/recognition/arcface_torch/* ./models/
  4. Overwrite './models/backbones/iresnet.py' with the file in our repository.

The structure of the directory 'models' should be:

	./models
	├── ms1mv3_arcface_r50_fp16
	│   ├── backbone.pth
	│   └── ...                      # Trained model downloaded.
	├── backbones
	│   ├── *iresnet.py              # Overwritten by our code.
	│   └── ...
	└── ...                          # Files/directories downloaded from the ArcFace repo.
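
With the backbone in place, a perceptual loss can be computed as the cosine distance between ArcFace embeddings of the rendered and target faces. This is only a minimal sketch under our assumptions (112x112 aligned inputs, iresnet50 from the copied arcface_torch code); it is not this repository's exact loss implementation.

import torch
import torch.nn.functional as F
from models.backbones.iresnet import iresnet50

net = iresnet50(fp16=False)
net.load_state_dict(torch.load('./models/ms1mv3_arcface_r50_fp16/backbone.pth',
                               map_location='cpu'))
net.eval()

def perceptual_loss(rendered, target):
    """Cosine distance between ArcFace embeddings of 112x112 face batches."""
    with torch.no_grad():
        feat_target = net(target)        # no gradients for the target branch
    feat_rendered = net(rendered)        # gradients flow back to the renderer
    return 1.0 - F.cosine_similarity(feat_rendered, feat_target).mean()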

● Install Dependencies

We recommend using Anaconda or Miniconda to create a virtual environment and install the packages. You can set up the environment with the following commands:

conda create -n FOCUS python=3.6
conda activate FOCUS
pip install -r requirements.txt

● Step-wise Training

To train the proposed network, please follow the steps below. A conceptual sketch of the joint training in step 5 follows this list.

  1. Enter the directory:
cd ./Occlusion_Robust_MoFA
  2. Unsupervised Initialization:
python Step1_Pretrain_MoFA.py --img_path ./image_root/Dataset
  3. Generate the UNet Training Set:
python Step2_UNet_trainset_generation.py --img_path ./image_root/Dataset
  4. Pretrain the UNet:
python Step3_Pretrain_Unet.py
  5. Joint Segmentation and Reconstruction:
python Step4_UNet_MoFA_EM.py --img_path ./image_root/Dataset
  6. Test-time Adaptation (optional):

    To bridge the domain gap between the training and testing data and reach higher performance on the test dataset, test-time adaptation is available with the following command:

python Step4_UNet_MoFA_EM.py --img_path ./image_root/Dataset_adapt --pretrained_model iteration_num
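
For intuition about step 5, below is a toy, self-contained sketch of the EM-style alternation as we understand it from the paper: the mask gates the photometric reconstruction loss, and the mask itself is weakly supervised by how well the model explains each pixel. Every module here is a stand-in; this is not the repository's actual training loop.

import torch
import torch.nn as nn
import torch.nn.functional as F

mofa = nn.Conv2d(3, 3, 1)                                # stand-in for the face autoencoder
unet = nn.Sequential(nn.Conv2d(3, 1, 1), nn.Sigmoid())   # stand-in for the segmenter
opt = torch.optim.Adam(list(mofa.parameters()) + list(unet.parameters()), lr=1e-4)

images = torch.rand(2, 3, 224, 224)                      # dummy batch

for _ in range(3):
    rendered = mofa(images)                              # reconstruction attempt
    mask = unet(images)                                  # P(pixel is unoccluded face)

    # Penalize the reconstruction only where the mask trusts the pixel,
    # so occluders stop corrupting the photometric loss.
    recon_loss = (mask * (rendered - images) ** 2).mean()

    # Weak supervision for the mask: pixels the model already explains
    # well are treated as pseudo-"face" labels.
    agreement = torch.exp(-((rendered - images) ** 2).mean(1, keepdim=True))
    target = (agreement > 0.5).float().detach()
    seg_loss = F.binary_cross_entropy(mask, target)

    opt.zero_grad()
    (recon_loss + seg_loss).backward()
    opt.step()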

● Testing

To test the model saved as './MoFA_UNet_Save/model-path/model-name', use the command below:

python Demo.py --img_path ./image_root/Dataset \
    --pretrained_model_test ./MoFA_UNet_Save/model-path/model-name.model \
    --test_mode pipeline_name --test_path test_dataset_root \
    --save_path save_path --landmark_list_name landmark_filename_optional.csv

Docker Image

● Differences

  1. .csv files are no longer required in the Docker version; the landmarks are detected automatically.
  2. The naming of some variables is fixed.
  3. The misfit prior is also included in the Docker image.

● Getting started

  1. Pull the image:
sudo docker pull chunluli/focus:1.2
  2. Run a container with your data directory /DataDir mounted, then attach to it:
docker run -v /DataDir:/FOCUS/data -itd chunluli/focus:1.2 /bin/bash 

docker attach containerID
  3. Run the following command to see how to use the code:
python show_instructions.py

More information can be found on Docker Hub.

Citation

Please cite the following paper if this model helps your research:

@inproceedings{li2023robust,
title={Robust Model-based Face Reconstruction through Weakly-Supervised Outlier Segmentation},
author={Li, Chunlu and Morel-Forster, Andreas and Vetter, Thomas and Egger, Bernhard and Kortylewski, Adam},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={372--381},
year={2023}
}

This code is built on top of the MoFA re-implementation by Tatsuro Koizumi, and the data processing builds on Deep3D. If you base your own work on ours, please also cite the following papers:

@inproceedings{koizumi2020look,
  title={“Look Ma, no landmarks!”--Unsupervised, model-based dense face alignment},
  author={Koizumi, Tatsuro and Smith, William AP},
  booktitle={European Conference on Computer Vision},
  pages={690--706},
  year={2020},
  organization={Springer}
}

@inproceedings{deng2019accurate,
title={Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set},
author={Yu Deng and Jiaolong Yang and Sicheng Xu and Dong Chen and Yunde Jia and Xin Tong},
booktitle={IEEE Computer Vision and Pattern Recognition Workshops},
year={2019}
}
