CenterFace: real-time face detection
An unofficial version of CenterFace, which achieves the best balance between speed and accuracy. CenterFace is a practical anchor-free face detection and alignment method for edge devices.
The project provides training scripts, training datasets, and pretrained models so that users can reproduce the results. Finally, thanks to the CenterFace author for the training advice.
Performance results on the WIDER FACE validation set
- Trained on the same training dataset without additional data.
- For multi-scale testing, the scales are set to 0.8, 1.0, 1.2, and 1.4, but the images are still resized to 800×800, so I think it is not a true multi-scale test (see the sketch after the table below).
| Method | Easy | Medium | Hard |
|---|---|---|---|
| ours (one scale) | 0.9206 | 0.9089 | 0.7846 |
| original | 0.922 | 0.911 | 0.782 |
| ours (multi-scale) | 0.9306 | 0.9193 | 0.8008 |
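As a side note on the multi-scale numbers above, here is a minimal sketch of what multi-scale testing generally looks like, assuming a hypothetical `detect(image)` wrapper that returns Nx5 boxes `[x1, y1, x2, y2, score]`; this is illustrative only and not the repo's actual test code.

```python
import cv2
import numpy as np

def nms(dets, iou_thr=0.4):
    """Basic greedy NMS on [x1, y1, x2, y2, score] rows."""
    x1, y1, x2, y2, scores = dets[:, 0], dets[:, 1], dets[:, 2], dets[:, 3], dets[:, 4]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thr]
    return dets[keep]

def multi_scale_detect(image, detect, scales=(0.8, 1.0, 1.2, 1.4)):
    """Illustrative multi-scale test: rescale the image by each factor,
    run a (hypothetical) single-scale detector, map the boxes back to the
    original coordinates, then merge the detections from all scales with NMS.
    Note the caveat above: if every rescaled image is resized back to a
    fixed 800x800 input before inference, the effective test resolution
    barely changes, which is why it is not a true multi-scale test."""
    all_dets = []
    for scale in scales:
        resized = cv2.resize(image, None, fx=scale, fy=scale)
        dets = np.asarray(detect(resized), dtype=np.float32)  # hypothetical wrapper
        if len(dets):
            dets[:, :4] /= scale  # back to original image coordinates
            all_dets.append(dets)
    if not all_dets:
        return np.zeros((0, 5), dtype=np.float32)
    return nms(np.concatenate(all_dets, axis=0))
```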
Requirements
The project uses PyTorch. You can use pip or conda to install the requirements:
# for pip
cd $project
pip install -r requirements.txt
# for conda
conda env create -f enviroment.yaml
Test
- Download the pretrained model from Baidu (password: etdi).
- Download the WIDER FACE validation set from Baidu (password: y4wg).
- Test on the validation set:
cd $project/src
source activate torch110
python test_wider_face.py
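The evaluation step below reads per-image prediction files from a result folder. As a rough sketch, the standard WIDER FACE prediction format is one .txt file per image: the image name, the number of faces, then one `x y w h score` line per face. I am assuming test_wider_face.py writes something equivalent; the helper below is illustrative, not the repo's code.

```python
import os

def save_wider_face_preds(pred_dir, event, img_name, dets):
    """Write detections for one image in the standard WIDER FACE format:
    line 1: image name, line 2: number of faces,
    then one "x y w h score" line per face.
    (Assumption: this mirrors what test_wider_face.py produces.)"""
    out_dir = os.path.join(pred_dir, event)  # e.g. "0--Parade"
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, img_name + ".txt"), "w") as f:
        f.write(img_name + "\n")
        f.write(f"{len(dets)}\n")
        for x, y, w, h, score in dets:
            f.write(f"{x:.1f} {y:.1f} {w:.1f} {h:.1f} {score:.3f}\n")
```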
- Calculate the accuracy:
cd $project/evaluate
python3 setup.py build_ext --inplace
python evaluation.py --pred {the result folder}
>>>
Easy Val AP: 0.9257383419951156
Medium Val AP: 0.9131308732465665
Hard Val AP: 0.7717305552550734
- Face detection video demo: video
Train
The backbone is MobileNetV2, the same as in the original paper.
The annotation file is in COCO format. The annotation file and training data can be downloaded from Baidu (password: f9hh).
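For orientation, here is a rough sketch of what a COCO-format face annotation looks like. The field names follow the standard COCO layout; the exact keys and landmark encoding used by this repo's annotation file may differ.

```python
# Illustrative COCO-style annotation for a single face.
# Assumption: landmarks are stored as COCO keypoints ([x, y, visibility] * 5);
# the repo's actual annotation file may use a different encoding.
annotation_example = {
    "images": [
        {"id": 1, "file_name": "0--Parade/0_Parade_marchingband_1_849.jpg",
         "width": 1024, "height": 768}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [449.0, 330.0, 122.0, 149.0],  # [x, y, w, h]
            "area": 122.0 * 149.0,
            "iscrowd": 0,
            "keypoints": [488.9, 373.6, 2, 542.1, 376.3, 2, 515.0, 412.7, 2,
                          485.7, 443.9, 2, 538.7, 445.8, 2],
            "num_keypoints": 5,
        }
    ],
    "categories": [{"id": 1, "name": "face"}],
}
```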
train
cd $project/src/tools
source activate torch110
python main.py
Training tricks
Training directly with the current code will not achieve the precision reported in the paper (I have also tested various settings).
Here is how I train:
- First train with an input size of 640×640 or 514×514.
- Then fine-tune with an input size of 800×800.
- For the easy and hard parts, the crop scale is sampled as `s = s * np.random.choice(np.arange(0.3, 1.2, 0.1))`. The larger the value, the more small-face samples are generated (see the sketch after this list).
Alternatively, you can fine-tune from the pretrained model.
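A minimal sketch of the scale-jitter idea from the list above, assuming a CenterNet-style crop augmentation where `s` is the side length of the square crop taken from the original image. The function and variable names here are illustrative, not the repo's actual code.

```python
import numpy as np

def sample_crop_scale(img_h, img_w, scale_range=np.arange(0.3, 1.2, 0.1)):
    """Jitter the crop scale so that faces appear at varied sizes.

    Illustrative sketch only: s starts from the longer image side (as in
    CenterNet-style augmentation) and is multiplied by a random factor.
    Larger factors keep a bigger crop region, so after resizing to the
    fixed network input the faces become relatively smaller, i.e. the
    model sees more small-face samples.
    """
    s = max(img_h, img_w)
    s = s * np.random.choice(scale_range)
    return s
```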
Train on your own data
Follow the instructions in CenterNet to prepare your own dataset.
TO DO
- Use a more powerful and smaller backbone
- Try other FPN tricks
Reference
Code is borrowed from CenterNet.