Unrestricted Facial Geometry Reconstruction Using Image-to-Image Translation - Official PyTorch Implementation
Evaluation code for Unrestricted Facial Geometry Reconstruction Using Image-to-Image Translation. Finally ported to PyTorch!
Recent Updates
2020.10.27
: Added STL support
2020.05.07
: Added a wheel package!
2020.05.06
: Added a MyBinder version for quick testing of the model
2020.04.30
: Initial PyTorch release
What's in this release?
The original pix2vertex repo was composed of three parts:
- A network that maps an input image to depth and correspondence maps, trained on synthetic facial data
- A non-rigid ICP scheme for converting the output maps to a full 3D mesh
- A shape-from-shading scheme for adding fine mesoscopic details
This repo currently contains our image-to-image network, ported to PyTorch together with its pretrained weights, and a simple Python post-processing scheme.
- The released network was trained on a combination of synthetic images and unlabeled real images for some extra robustness :)
Installation
Installation from PyPi
$ pip install pix2vertex
Installation from source
$ git clone https://github.com/eladrich/pix2vertex.pytorch.git
$ cd pix2vertex.pytorch
$ python setup.py install
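A quick way to confirm the installation is to import the package and check that the reconstruct entry point used throughout this README is available (a minimal sketch; it assumes nothing beyond the calls shown in the usage example below):

import pix2vertex as p2v

# Sanity check: the reconstruct method documented below should be exposed by the package
print(hasattr(p2v, 'reconstruct'))  # expected: True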
Usage
The quickest way to try p2v is using the reconstruct method over an input image, followed by visualization or STL creation.
import pix2vertex as p2v
from imageio import imread
image = imread('<some image file>')
result, crop = p2v.reconstruct(image)
# Interactive visualization in a notebook
p2v.vis_depth_interactive(result['Z_surface'])
# Static visualization using matplotlib
p2v.vis_depth_matplotlib(crop, result['Z_surface'])
# Export to STL
p2v.save2stl(result['Z_surface'], 'res.stl')
For a more complete example see the reconstruct_pipeline notebook. You can give it a try without any installation using our MyBinder port.
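To process a whole folder of images rather than a single file, the same reconstruct call can be wrapped in a loop. The sketch below only uses the p2v calls shown above; the input and output directory names are placeholders:

import os
import pix2vertex as p2v
from imageio import imread

input_dir = 'faces'              # placeholder: a folder of face images
output_dir = 'reconstructions'   # placeholder: where the STL files will go
os.makedirs(output_dir, exist_ok=True)

for name in os.listdir(input_dir):
    if not name.lower().endswith(('.jpg', '.jpeg', '.png')):
        continue
    image = imread(os.path.join(input_dir, name))
    result, crop = p2v.reconstruct(image)
    # Export each reconstruction as an STL file named after the input image
    stl_path = os.path.join(output_dir, os.path.splitext(name)[0] + '.stl')
    p2v.save2stl(result['Z_surface'], stl_path)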
Pretrained Model
Models can be downloaded from these links:
- pix2vertex model
- dlib landmark predictor - note that the dlib model has its own license.
If no model path is specified the package automagically downloads the required models.
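If you prefer to fetch the dlib landmark predictor yourself instead of relying on the automatic download, a sketch like the following could retrieve and decompress it. The URL is dlib's official file server; where pix2vertex expects the file to live is not covered here, so the output path is a placeholder:

import bz2
import urllib.request

# Official dlib download location for the 68-point landmark predictor
url = 'http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2'
archive, _ = urllib.request.urlretrieve(url)

# Decompress the .bz2 archive to a local .dat file (placeholder path)
with bz2.open(archive, 'rb') as src, open('shape_predictor_68_face_landmarks.dat', 'wb') as dst:
    dst.write(src.read())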
TODOs
- Port Torch model to PyTorch
- Release an inference notebook (using K3D)
- Add requirements
- Pack as wheel
- Port to MyBinder
- Add a simple method to export an STL file for printing
- Port the Shape-from-Shading method from our original MATLAB implementation
- Write a short blog about the revised training scheme
Citation
If you use this code for your research, please cite our paper Unrestricted Facial Geometry Reconstruction Using Image-to-Image Translation:
@article{sela2017unrestricted,
  title={Unrestricted Facial Geometry Reconstruction Using Image-to-Image Translation},
  author={Sela, Matan and Richardson, Elad and Kimmel, Ron},
  journal={arXiv},
  year={2017}
}