Deep Alignment Network

This is a reference implementation of the face alignment method described in "Deep Alignment Network: A convolutional neural network for robust face alignment", which was accepted to the First Faces in-the-Wild Workshop-Challenge at CVPR 2017. You can read the entire paper on arXiv here. You can download the presentation and poster from Dropbox here or Google Drive here.

Getting started

First of all, make sure you have Python 2.7 installed. For that purpose we recommend Anaconda, which includes all of the necessary libraries except:

  • Theano 0.9.0
  • Lasagne 0.2
  • OpenCV 3.1.0 or newer

OpenCV can be downloaded from Christoph Gohlke's website. Theano and Lasagne can be installed with the following commands:

  pip install Theano==0.9.0
  pip install https://github.com/Lasagne/Lasagne/archive/master.zip
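As a quick sanity check after installation, a small snippet like the following can report which of the dependencies are importable and what versions are present. This is a hedged convenience sketch, not part of the repository; missing packages are reported rather than raising, so it is safe to run before everything is installed.

```python
# Sketch: report which dependencies are importable and their versions.
import importlib

def report_versions(modules=("theano", "lasagne", "cv2")):
    lines = []
    for name in modules:
        try:
            mod = importlib.import_module(name)
            lines.append("%s %s" % (name, getattr(mod, "__version__", "?")))
        except ImportError:
            lines.append("%s missing" % name)
    return lines

if __name__ == "__main__":
    print("\n".join(report_versions()))
```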

Once you have installed Python and the dependencies, download at least one of the two pre-trained models available on Dropbox here or Google Drive here.

The easiest way to see our method in action is to run the CameraDemo.py script, which performs face tracking on a local webcam.
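As a rough illustration only, a webcam tracking loop of this kind typically looks like the sketch below. The predict_landmarks callable is a hypothetical stand-in for the repository's actual DAN model interface, which this sketch does not reproduce; CameraDemo.py itself is the script to run.

```python
# Hedged sketch of a webcam landmark-tracking loop; predict_landmarks is a
# hypothetical callable mapping a BGR frame to an iterable of (x, y) points.
def run_webcam_demo(predict_landmarks, camera_index=0):
    import cv2  # imported lazily so the sketch can be read without OpenCV
    cap = cv2.VideoCapture(camera_index)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for (x, y) in predict_landmarks(frame):
            cv2.circle(frame, (int(x), int(y)), 2, (0, 255, 0), -1)
        cv2.imshow("DAN demo", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```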

Running the experiments from the article

Before continuing download the model files as described above.

Comparison with state-of-the-art

Download the 300W, LFPW, HELEN, AFW and IBUG datasets from https://ibug.doc.ic.ac.uk/resources/facial-point-annotations/ and extract them into separate directories under /data/images/: 300W, lfpw, helen, afw and ibug. Then run the TestSetPreparation.py script; it may take a while.

Use the DANtesting.py script to perform the experiments. It will calculate the average error for all of the test subsets as well as the AUC@0.08 score and failure rate for the 300W public and private test sets.

The parameters you can set in the script are as follows:

  • verbose: if True, the script will display the error for each image,
  • showResults: if True, the localized landmarks will be shown for each image,
  • showCED: if True, the Cumulative Error Distribution curve will be shown along with the AUC score,
  • normalization: 'centers' for inter-pupil distance, 'corners' for inter-ocular distance, 'diagonal' for bounding-box diagonal normalization,
  • failureThreshold: the error threshold above which results are counted as failures; for inter-ocular distance normalization it should be set to 0.08,
  • networkFilename: either '../DAN.npz' or '../DAN-Menpo.npz'.
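The metric these parameters control is the standard one in face alignment: the mean point-to-point landmark error divided by a normalization distance, with errors above failureThreshold counted as failures. A minimal sketch of that computation, not the repository's exact code, might look like:

```python
# Sketch of the normalized-error metric; pred and gt are (N, 2) landmark
# arrays and norm_dist is e.g. the inter-ocular distance for this face.
import numpy as np

def normalized_error(pred, gt, norm_dist):
    return np.mean(np.linalg.norm(pred - gt, axis=1)) / norm_dist

def failure_rate(errors, threshold=0.08):
    # Fraction of images whose normalized error exceeds the threshold.
    return float(np.mean(np.asarray(errors) > threshold))
```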

Results on the Menpo test set

Download the Menpo test set from https://ibug.doc.ic.ac.uk/resources/ and extract it. Open the MenpoEval.py script and make sure that MenpoDir is set to the directory with the images you just extracted. Run the script to process the dataset. The results will be saved as images and .pts files in the directories indicated by the imgOutputDir and ptsOutputDir variables.

TensorFlow implementation

Two TensorFlow implementations of Deep Alignment Network have been published by other GitHub users:

Citation

If you use this software in your research, please cite the following paper:

Kowalski, M.; Naruniec, J.; Trzcinski, T.: "Deep Alignment Network: A convolutional neural network for robust face alignment", CVPRW 2017
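For convenience, a BibTeX entry consistent with the reference above might look as follows; the citation key and the exact booktitle wording are assumptions, and fields not stated above (such as page numbers) are deliberately omitted:

```bibtex
@inproceedings{kowalski2017dan,
  author    = {Kowalski, M. and Naruniec, J. and Trzcinski, T.},
  title     = {Deep Alignment Network: A convolutional neural network for robust face alignment},
  booktitle = {CVPR Workshops (CVPRW)},
  year      = {2017}
}
```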

License

While the code is licensed under the MIT license, which allows for commercial use, keep in mind that the models linked above were trained on the 300-W dataset, which allows for research use only. For details please see: https://ibug.doc.ic.ac.uk/resources/facial-point-annotations/

Contact

If you have any questions or suggestions feel free to contact me at [email protected].