# Joint Voxel and Coordinate Regression (JVCR) for 3D Facial Landmark Localization
This repository includes the PyTorch code of the JVCR method described in *Adversarial Learning Semantic Volume for 2D/3D Face Shape Regression in the Wild* (IEEE Transactions on Image Processing, 2019).
## Requirements

- python 2.7
- PyTorch and the other required Python packages
## Usage

Clone the repository and install the dependencies mentioned above:

```
git clone https://github.com/HongwenZhang/JVCR-3Dlandmark.git
cd JVCR-3Dlandmark
```
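If you want to confirm the environment before going further, a minimal sanity-check script along the following lines can be used. It is not part of the repository, and the file name `check_env.py` is only a suggestion; it simply verifies the Python version and the PyTorch installation mentioned above.

```python
# check_env.py -- hypothetical helper, not part of the repository
from __future__ import print_function

import sys

import torch

# The repository targets python 2.7, so warn if a different major version is used.
print("Python version: %s" % sys.version.split()[0])
if sys.version_info[0] != 2:
    print("Warning: the code is written for python 2.7")

# Confirm that PyTorch is importable and report CUDA availability,
# since the demo and training scripts rely on it.
print("PyTorch version: %s" % torch.__version__)
print("CUDA available: %s" % torch.cuda.is_available())
```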
Then, you can run the demo code or train a model from scratch.
## Demo

- Download the pre-trained model (trained on 300W-LP) and put it into the `checkpoint` directory
- Run the demo code:

```
python run_demo.py --verbose
```
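For reference, the downloaded weights can be inspected with a short standalone script like the sketch below. The checkpoint file name and the dictionary keys are assumptions and may differ from the actual release; only the `checkpoint` directory comes from the step above.

```python
# inspect_checkpoint.py -- hypothetical sketch; the file name inside
# `checkpoint/` and the dictionary keys are assumptions.
from __future__ import print_function

import torch

# Load the checkpoint on CPU so no GPU is required just to inspect it.
checkpoint = torch.load('checkpoint/model_best.pth.tar',
                        map_location=lambda storage, loc: storage)

# PyTorch checkpoints are typically dictionaries; print the top-level keys
# and the names and shapes of a few stored parameters.
if isinstance(checkpoint, dict):
    print("Checkpoint keys: %s" % list(checkpoint.keys()))
    state_dict = checkpoint.get('state_dict', checkpoint)
else:
    state_dict = checkpoint.state_dict()

for name, param in list(state_dict.items())[:5]:
    print("%s: %s" % (name, tuple(param.size())))
```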
## Training

- Prepare the training and evaluation datasets
  - Download 300W-LP and AFLW2000-3D
  - Create soft links to the dataset directories:

    ```
    ln -s /path/to/your/300W_LP data/300wLP/images
    ln -s /path/to/your/aflw2000 data/aflw2000/images
    ```

  - Download the `.json` annotation files from here and put them into `data/300wLP` and `data/aflw2000`, respectively (a quick layout check is sketched after this list)
- Run the training code:

```
python train.py --gpus 0 -j 4
```
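Before launching training, a quick check of the expected data layout can save a failed run. The sketch below is a hypothetical helper that relies only on the paths mentioned in this README; the exact annotation file names are not assumed, only their `.json` extension.

```python
# check_data.py -- hypothetical helper to verify the dataset layout above
from __future__ import print_function

import glob
import os

# Image directories created by the soft links above.
image_dirs = ['data/300wLP/images', 'data/aflw2000/images']
# The .json annotation files are expected next to each image directory.
annotation_dirs = ['data/300wLP', 'data/aflw2000']

for d in image_dirs:
    status = 'ok' if os.path.isdir(d) else 'MISSING'
    print('%-25s %s' % (d, status))

for d in annotation_dirs:
    json_files = glob.glob(os.path.join(d, '*.json'))
    print('%-25s %d .json file(s)' % (d, len(json_files)))
```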
## Acknowledgment

The code is developed upon PyTorch-Pose. Thanks to the original author.
## Citation

If the code is helpful in your research, please cite the following paper:

```
@article{zhang2019adversarial,
  title={Adversarial Learning Semantic Volume for 2D/3D Face Shape Regression in the Wild},
  author={Zhang, Hongwen and Li, Qi and Sun, Zhenan},
  journal={IEEE Transactions on Image Processing},
  volume={28},
  number={9},
  pages={4526--4540},
  year={2019},
  publisher={IEEE}
}
```