Single-Shot Refinement Neural Network for Object Detection

By Shifeng Zhang, Longyin Wen, Xiao Bian, Zhen Lei, Stan Z. Li.

Introduction

We propose a novel single-shot detector, called RefineDet, that achieves better accuracy than two-stage methods while maintaining comparable efficiency to one-stage methods. You can use this code to train and evaluate RefineDet for object detection. For more details, please refer to our paper.

RefineDet Structure

| System | VOC2007 test mAP | FPS (Titan X) | Number of Boxes | Input resolution |
|:---|:---:|:---:|:---:|:---:|
| Faster R-CNN (VGG16) | 73.2 | 7 | ~6000 | ~1000 x 600 |
| YOLO (GoogLeNet) | 63.4 | 45 | 98 | 448 x 448 |
| YOLOv2 (Darknet-19) | 78.6 | 40 | 1445 | 544 x 544 |
| SSD300* (VGG16) | 77.2 | 46 | 8732 | 300 x 300 |
| SSD512* (VGG16) | 79.8 | 19 | 24564 | 512 x 512 |
| RefineDet320 (VGG16) | 80.0 | 40 | 6375 | 320 x 320 |
| RefineDet512 (VGG16) | 81.8 | 24 | 16320 | 512 x 512 |

RefineDet results on multiple datasets

Note: RefineDet320+ and RefineDet512+ are evaluated with the multi-scale testing strategy. The multi-scale testing code has also been released in this repository.

Citing RefineDet

Please cite our paper in your publications if it helps your research:

@inproceedings{zhang2018single,
  title = {Single-Shot Refinement Neural Network for Object Detection},
  author = {Zhang, Shifeng and Wen, Longyin and Bian, Xiao and Lei, Zhen and Li, Stan Z.},
  booktitle = {CVPR},
  year = {2018}
}

Contents

  1. Installation
  2. Preparation
  3. Training
  4. Evaluation
  5. Models

Installation

  1. Get the code. We will call the cloned directory $RefineDet_ROOT.
git clone https://github.com/sfzhang15/RefineDet.git
  2. Build the code. Please follow the Caffe instructions to install all necessary packages and build it.
cd $RefineDet_ROOT
# Modify Makefile.config according to your Caffe installation.
# Make sure to add $RefineDet_ROOT/python to your PYTHONPATH.
cp Makefile.config.example Makefile.config
make all -j && make py
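
A quick way to verify that pycaffe built correctly is to import it from Python. The following is a minimal sketch; it assumes you have already added $RefineDet_ROOT/python to your PYTHONPATH as noted above.

# verify_pycaffe.py -- minimal check that the pycaffe build is importable
# (assumes $RefineDet_ROOT/python is on PYTHONPATH).
import caffe

caffe.set_mode_cpu()  # no GPU needed just to verify the build
print('pycaffe imported; caffe.Net available:', hasattr(caffe, 'Net'))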

Preparation

  1. Download the fully convolutional reduced (atrous) VGGNet model. By default, we assume the model is stored in $RefineDet_ROOT/models/VGGNet/.

  2. Download ResNet-101. By default, we assume the model is stored in $RefineDet_ROOT/models/ResNet/.

  3. Follow data/VOC0712/README.md to download the VOC2007 and VOC2012 datasets and create the LMDB files for VOC2007 training and testing.

  4. Follow data/VOC0712Plus/README.md to download the VOC2007 and VOC2012 datasets and create the LMDB files for VOC2012 training and testing.

  5. Follow data/coco/README.md to download the MS COCO dataset and create the LMDB files for COCO training and testing.
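
Before launching training, it can save time to confirm that the pretrained backbones are where the scripts expect them. The sketch below only checks the two model directories named above; the exact .caffemodel file names depend on what you downloaded.

# check_preparation.py -- hedged sanity check of the expected model layout.
import glob
import os

# Assumes you export RefineDet_ROOT as an environment variable, or run this
# from the repository root (the '.' fallback).
root = os.environ.get('RefineDet_ROOT', '.')
for backbone_dir in ('models/VGGNet', 'models/ResNet'):
    found = glob.glob(os.path.join(root, backbone_dir, '*.caffemodel'))
    print(backbone_dir, '->', found if found else 'no .caffemodel found; see the Preparation steps above')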

Training

  1. Train your model on PASCAL VOC.
# It will create model definition files and save snapshot models in:
#   - $RefineDet_ROOT/models/VGGNet/VOC0712{Plus}/refinedet_vgg16_{size}x{size}/
# and job file, log file, and the python script in:
#   - $RefineDet_ROOT/jobs/VGGNet/VOC0712{Plus}/refinedet_vgg16_{size}x{size}/
python examples/refinedet/VGG16_VOC2007_320.py
python examples/refinedet/VGG16_VOC2007_512.py
python examples/refinedet/VGG16_VOC2012_320.py
python examples/refinedet/VGG16_VOC2012_512.py
  2. Train your model on COCO.
# It will create model definition files and save snapshot models in:
#   - $RefineDet_ROOT/models/{Network}/coco/refinedet_{network}_{size}x{size}/
# and job file, log file, and the python script in:
#   - $RefineDet_ROOT/jobs/{Network}/coco/refinedet_{network}_{size}x{size}/
python examples/refinedet/VGG16_COCO_320.py
python examples/refinedet/VGG16_COCO_512.py
python examples/refinedet/ResNet101_COCO_320.py
python examples/refinedet/ResNet101_COCO_512.py
  3. Train your model from COCO to VOC (based on VGG16).
# It will extract a VOC model from a pretrained COCO model.
ipython notebook convert_model_320.ipynb
ipython notebook convert_model_512.ipynb
# It will create model definition files and save snapshot models in:
#   - $RefineDet_ROOT/models/VGGNet/VOC0712{Plus}/refinedet_vgg16_{size}x{size}_ft/
# and job file, log file, and the python script in:
#   - $RefineDet_ROOT/jobs/VGGNet/VOC0712{Plus}/refinedet_vgg16_{size}x{size}_ft/
python examples/refinedet/finetune_VGG16_VOC2007_320.py
python examples/refinedet/finetune_VGG16_VOC2007_512.py
python examples/refinedet/finetune_VGG16_VOC2012_320.py
python examples/refinedet/finetune_VGG16_VOC2012_512.py
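
If a run is interrupted, Caffe can resume from the last .solverstate snapshot with its standard train command. The sketch below is an illustration only: the solver and snapshot paths are placeholders that follow the directory pattern printed above, so point them at the files the generator script actually wrote for you.

# resume_training_sketch.py -- hedged sketch of resuming an interrupted run
# with the stock caffe CLI. Run from $RefineDet_ROOT; paths are placeholders.
import subprocess

solver = 'models/VGGNet/VOC0712/refinedet_vgg16_320x320/solver.prototxt'       # placeholder path
snapshot = 'models/VGGNet/VOC0712/refinedet_vgg16_320x320/latest.solverstate'  # placeholder path

subprocess.check_call([
    './build/tools/caffe', 'train',
    '--solver=' + solver,
    '--snapshot=' + snapshot,
    '--gpu=0',
])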

Evaluation

  1. Build the Cython modules.
cd $RefineDet_ROOT/test/lib
make -j
  2. Change self._devkit_path in test/lib/datasets/pascal_voc.py to your own path.

  3. Change self._data_path in test/lib/datasets/coco.py to your own path.

  4. Check out test/refinedet_demo.py to see how to detect objects using a RefineDet model and how to plot the detection results (a rough pycaffe sketch of such a detection pass follows this list).

# For GPU users
python test/refinedet_demo.py
# For CPU users
python test/refinedet_demo.py --gpu_id -1
  5. Evaluate the trained models via test/refinedet_test.py.
# You can modify the parameters in refinedet_test.py for different types of evaluation:
#  - single_scale: True for single-scale testing, False for multi-scale testing.
#  - test_set: 'voc_2007_test', 'voc_2012_test', 'coco_2014_minival', 'coco_2015_test-dev'.
#  - voc_path: path to the trained VOC caffemodel.
#  - coco_path: path to the trained COCO caffemodel.
# For 'voc_2007_test' and 'coco_2014_minival', it will directly output the mAP results.
# For 'voc_2012_test' and 'coco_2015_test-dev', it will save the detections, which you should submit to the evaluation server to get the mAP results.
python test/refinedet_test.py
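
For orientation, the core of a detection pass in this SSD-derived codebase looks roughly like the sketch below. It is an illustration, not the authors' script: the deploy/caffemodel paths are placeholders, and the 'data' input, the 'detection_out' output layout ([image_id, label, confidence, xmin, ymin, xmax, ymax] in normalized coordinates), the input size, and the mean values are SSD-style assumptions. test/refinedet_demo.py is the authoritative reference.

# detection_sketch.py -- hedged single-image detection pass with pycaffe.
import caffe
import cv2
import numpy as np

deploy = 'models/VGGNet/VOC0712/refinedet_vgg16_320x320/deploy.prototxt'    # placeholder
weights = 'models/VGGNet/VOC0712/refinedet_vgg16_320x320/final.caffemodel'  # placeholder

caffe.set_mode_cpu()  # or caffe.set_mode_gpu(); caffe.set_device(0)
net = caffe.Net(deploy, weights, caffe.TEST)

img = cv2.imread('example.jpg')                     # BGR, as Caffe expects
h, w = img.shape[:2]
blob = cv2.resize(img, (320, 320)).astype(np.float32)
blob -= np.array([104.0, 117.0, 123.0])             # SSD-style per-channel mean (assumption)
blob = blob.transpose(2, 0, 1)[np.newaxis, ...]     # HWC -> NCHW

net.blobs['data'].reshape(*blob.shape)
net.blobs['data'].data[...] = blob
detections = net.forward()['detection_out'][0, 0]   # rows: [im_id, label, conf, x1, y1, x2, y2]

for _, label, conf, x1, y1, x2, y2 in detections:
    if conf >= 0.6:                                  # arbitrary display threshold
        print(int(label), float(conf), x1 * w, y1 * h, x2 * w, y2 * h)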

Models

We provide models trained on different datasets. To help reproduce the results in Table 1, Table 2, and Table 4 of the paper, most model packages contain a trained .caffemodel file, the corresponding .prototxt files, and Python scripts.

  1. PASCAL VOC models (VGG-16):

  2. COCO models:

Note: If you cannot download the pre-trained models through the above links, you can download them from BaiduYun.
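
Once a model package is downloaded, a quick check is to load the weights against the bundled network definition. This is a sketch; the file names below are placeholders following the directory pattern used in the Training section, so substitute the ones from the package you downloaded.

# load_model_check.py -- hedged check that a downloaded model loads in pycaffe.
import caffe

deploy = 'models/VGGNet/VOC0712/refinedet_vgg16_320x320/deploy.prototxt'    # placeholder
weights = 'models/VGGNet/VOC0712/refinedet_vgg16_320x320/final.caffemodel'  # placeholder

caffe.set_mode_cpu()
net = caffe.Net(deploy, weights, caffe.TEST)
print('Loaded', len(net.params), 'parameterized layers; inputs:', list(net.inputs))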