
TRUST: Towards Racially Unbiased Skin Tone Estimation via Scene Disambiguation (ECCV2022)

This is the official PyTorch implementation of TRUST.

  • We identify, analyze, and quantify the problem of biased facial albedo estimation.
  • We propose the FAIR Challenge, a new synthetic benchmark with a novel evaluation protocol that measures albedo estimation in terms of skin-tone accuracy and diversity.
  • We propose TRUST, a new network that estimates facial albedo with higher accuracy and less skin-tone bias, so that the 3D head avatar reconstructed from a single image is faithful and inclusive.

Please refer to the arXiv paper for more details.

Getting Started

Clone the repo:

git clone https://github.com/HavenFeng/TRUST/
cd TRUST

Requirements

  • Python 3.8 (numpy, skimage, scipy, opencv)
  • PyTorch >= 1.7 (pytorch3d compatible)
    You can run
    pip install -r requirements.txt
    If you encounter errors when installing PyTorch3D, please follow the official installation guide to reinstall the library.
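Before installing, it can help to sanity-check that installed package versions meet the minimums above. The helper below is a hypothetical sketch (not part of the TRUST repo); it compares dotted version strings numerically, since plain string comparison gets cases like "1.10" vs "1.7" wrong:

```python
# Hypothetical sanity check for the version requirements above
# (not part of the TRUST repo). Compares dotted version strings
# numerically, so "1.10" correctly counts as newer than "1.7".

def meets_minimum(installed: str, required: str) -> bool:
    """Return True if `installed` is at least `required` (dotted version strings)."""
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return to_tuple(installed) >= to_tuple(required)

if __name__ == "__main__":
    # Example: PyTorch >= 1.7 as listed in the requirements.
    print(meets_minimum("1.10.2", "1.7"))  # True: (1, 10, 2) >= (1, 7)
    print(meets_minimum("1.6.0", "1.7"))   # False: too old
```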

Usage

  1. Prepare data & models

    Please check our project website to download the FAIR benchmark dataset and our released pretrained models.
    After downloading the pretrained models, put them in ./data.

  2. Run test
    a. FAIR benchmark

    python test.py --test_folder '/path/to/trust_models' --test_split val

    Change the --test_split flag to switch between the validation set and the test set.
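The command-line interface used above can be sketched with argparse. Only the --test_folder and --test_split flags appear in the README; the choices, default, and help strings below are assumptions for illustration, not the repo's actual test.py:

```python
import argparse

# Minimal sketch of the CLI shown above; --test_folder and --test_split
# come from the README, everything else (choices, default) is assumed.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        description="Run TRUST on the FAIR benchmark (illustrative sketch)")
    parser.add_argument("--test_folder", required=True,
                        help="path to the downloaded TRUST models")
    parser.add_argument("--test_split", choices=["val", "test"], default="val",
                        help="which FAIR split to evaluate")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args(
        ["--test_folder", "/path/to/trust_models", "--test_split", "val"])
    print(args.test_split)  # prints "val"
```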

Evaluation

TRUST (ours) achieves a 57% lower total-score error (35% lower Average ITA error, 77% lower Bias error) on the FAIR Challenge than the previous state-of-the-art method.
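The ITA (Individual Typology Angle) mentioned above is a standard skin-tone measure computed from CIELAB values: ITA = arctan((L* - 50) / b*) * 180 / pi. The sketch below implements that standard formula; the function name is ours, and the FAIR benchmark's exact error computation may differ in details:

```python
import math

def ita_degrees(L: float, b: float) -> float:
    """Individual Typology Angle (ITA) from CIELAB L* and b*.

    Standard formula: ITA = arctan((L* - 50) / b*) * 180 / pi.
    Higher ITA corresponds to lighter skin tones. atan2 is used so
    the edge case b* = 0 is handled without division by zero.
    """
    return math.degrees(math.atan2(L - 50.0, b))

if __name__ == "__main__":
    print(round(ita_degrees(70.0, 15.0), 2))  # arctan(20/15) -> 53.13
```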

For more details of the evaluation, please check our arXiv paper.

Citation

If you find our work useful to your research, please consider citing:

@inproceedings{Feng:TRUST:ECCV2022,
  title = {Towards Racially Unbiased Skin Tone Estimation via Scene Disambiguation}, 
  author = {Feng, Haiwen and Bolkart, Timo and Tesch, Joachim and Black, Michael J. and Abrevaya, Victoria}, 
  booktitle = {European Conference on Computer Vision}, 
  year = {2022}
}

Notes

Training code will also be released in the future.

License

This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.

Acknowledgements

For functions or scripts that are based on external sources, we acknowledge the origin individually in each file; we benefit from many great open-source resources.

We would also like to thank other recent public 3D face reconstruction works that allow us to easily perform quantitative and qualitative comparisons :)
DECA, Deep3DFaceReconstruction, GANFit, INORig, MGCNet

This work was partly supported by the German Federal Ministry of Education and Research (BMBF): Tuebingen AI Center, FKZ: 01IS18039B