SSD Pytorch
A PyTorch implementation of SSD variants, including the original SSD, DRFNet, and RefineDet.
Table of Contents
- Installation
- Datasets
- Training
- Performance
- References
Installation
- Install PyTorch-0.4.0 by selecting your environment on the website and running the appropriate command.
- Clone this repository.
- Note: We currently only support Python 3+.
- Then download the dataset by following the instructions below.
- Compile the nms and install coco tools:
cd SSD_Pytorch
# if you use anaconda3, maybe you need https://github.com/rbgirshick/py-faster-rcnn/issues/706
./make.sh
pip install pycocotools
Note: Check your GPU architecture support in utils/build.py, line 131. The default is:
'nvcc': ['-arch=sm_52',
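For example, a Pascal-class GPU (compute capability 6.1) needs -arch=sm_61, and Volta (7.0) needs -arch=sm_70. A sketch of the relevant setting, assuming utils/build.py follows the usual py-faster-rcnn-style build script (the surrounding Extension definition in this repo may differ):

# utils/build.py, around line 131: match -arch to your GPU's compute capability.
# sm_52 targets Maxwell; use sm_61 for Pascal, sm_70 for Volta, sm_75 for Turing.
extra_compile_args = {
    'gcc': ['-Wno-unused-function'],
    'nvcc': ['-arch=sm_61',          # was '-arch=sm_52'
             '--ptxas-options=-v',
             '-c',
             '--compiler-options', "'-fPIC'"],
}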
Datasets
To make things easy, we provide a simple VOC dataset loader that inherits torch.utils.data.Dataset, making it fully compatible with the torchvision.datasets API.
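Because the loader subclasses torch.utils.data.Dataset, it plugs directly into a standard DataLoader. A minimal sketch, assuming the loader and collate function keep the names used in the original ssd.pytorch data/ package (check this repo's data/ package for the exact signatures):

import os
import torch.utils.data as data
from data import VOCDetection, detection_collate  # names assumed; see the data/ package

# constructor arguments are illustrative; the real signature may take transforms, image sets, etc.
dataset = VOCDetection(root=os.path.expanduser('~/data/VOCdevkit'))
# batching, shuffling, and multi-process loading work out of the box
loader = data.DataLoader(dataset, batch_size=32, shuffle=True,
                         num_workers=4, collate_fn=detection_collate)
images, targets = next(iter(loader))  # standard DataLoader iteration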
VOC Dataset
Download VOC2007 trainval & test
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2007.sh # <directory>
Download VOC2012 trainval
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2012.sh # <directory>
Merge VOC2007 and VOC2012
- Move all images from VOC2007 and VOC2012 into VOCROOT/VOC0712/JPEGImages.
- Move all annotations from VOC2007 and VOC2012 into VOCROOT/VOC0712/JPEGImages/Annotations.
- Rename and merge the VOC2007 and VOC2012 ImageSets/Main/*.txt lists into VOCROOT/VOC0712/JPEGImages/ImageSets/Main/*.txt (a Python sketch follows this list).
The merged txt lists are:
2012_test.txt, 2007_test.txt, 0712_trainval_test.txt, 2012_trainval.txt, 0712_trainval.txt
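A minimal sketch of the merge in Python, assuming the stock VOCdevkit layout; the dataset root and the trainval split shown here are illustrative, so adapt them to the lists you actually need:

import shutil
from pathlib import Path

VOCROOT = Path('~/data/VOCdevkit').expanduser()       # adjust to your dataset root
DST = VOCROOT / 'VOC0712' / 'JPEGImages'
(DST / 'Annotations').mkdir(parents=True, exist_ok=True)
(DST / 'ImageSets' / 'Main').mkdir(parents=True, exist_ok=True)

for year in ('VOC2007', 'VOC2012'):
    src = VOCROOT / year
    for img in (src / 'JPEGImages').glob('*.jpg'):    # gather all images
        shutil.copy(img, DST / img.name)
    for ann in (src / 'Annotations').glob('*.xml'):   # gather all annotations
        shutil.copy(ann, DST / 'Annotations' / ann.name)

# example: concatenate the two trainval lists into 0712_trainval.txt
merged = []
for year in ('VOC2007', 'VOC2012'):
    txt = VOCROOT / year / 'ImageSets' / 'Main' / 'trainval.txt'
    merged += txt.read_text().splitlines()
(DST / 'ImageSets' / 'Main' / '0712_trainval.txt').write_text('\n'.join(merged) + '\n')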
COCO Dataset
Install the MS COCO dataset at /path/to/coco from the official website; the default is ~/data/COCO. Follow the instructions to prepare the minival2014 and valminusminival2014 annotations. All label files (.json) should be under the COCO/annotations/ folder. It should have this basic structure:
$COCO/
$COCO/cache/
$COCO/annotations/
$COCO/images/
$COCO/images/test2015/
$COCO/images/train2014/
$COCO/images/val2014/
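With pycocotools installed (see Installation above), a quick sanity check that the annotations are in place; the root path and annotation filename here are illustrative:

import os
from pycocotools.coco import COCO

# load one label file to confirm the layout and json are valid
ann_file = os.path.expanduser('~/data/COCO/annotations/instances_minival2014.json')  # illustrative
coco = COCO(ann_file)
print(len(coco.getImgIds()), 'images,', len(coco.getCatIds()), 'categories')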
UPDATE: COCO has since released new train2017 and val2017 sets, which are just new splits of the same images.
Training
- First download the fc-reduced VGG-16 PyTorch base network weights at: https://s3.amazonaws.com/amdegroot-models/vgg16_reducedfc.pth
- ResNet pre-trained base network weights are available for ResNet50, ResNet101, and ResNet152 (see the wget commands below).
- By default, we assume you have downloaded the files into the SSD_Pytorch/weights/pretrained_models dir:
mkdir weights
cd weights
mkdir pretrained_models
wget https://s3.amazonaws.com/amdegroot-models/vgg16_reducedfc.pth
wget https://download.pytorch.org/models/resnet50-19c8e357.pth
wget https://download.pytorch.org/models/resnet101-5d3b4d8f.pth
wget https://download.pytorch.org/models/resnet152-b121ed2d.pth
mv *.pth pretrained_models/  # move the downloaded weights into pretrained_models
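After downloading, you can sanity-check that a weight file is intact by loading it as a plain state dict on the CPU:

import torch

# the file is expected to be a state dict (an OrderedDict of parameter tensors)
state = torch.load('weights/pretrained_models/vgg16_reducedfc.pth', map_location='cpu')
print(len(state), 'tensors loaded')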
- To train SSD_Pytorch using the train script, simply specify the parameters listed in train.py as flags or change them manually.
python train.py --cfg ./configs/ssd_vgg_voc.yaml
- Note: All training configs are in ssd_vgg_voc.yaml; edit that file to change them.
- To evaluate a trained network:
python eval.py --cfg ./configs/ssd_vgg_voc.yaml --weights ./eval_weights
- To run detection on images:
# put some images in ./images first
python demo.py --cfg ./configs/ssd_vgg_voc.yaml --images ./images --save_folder ./output
You can specify the parameters listed in eval.py or demo.py by passing them as flags or changing them manually.
Performance
VOC2007 Test
mAP
We retrained some models, so the results differ from the original papers (input size = 300).
| ssd_vgg | ssd_res | ssd_darknet | drf_ssd_vgg | drf_ssd_res | refine_drf_vgg | refine_ssd_vgg |
|---|---|---|---|---|---|---|
| 77.5% | 77.0% | 77.6% | 79.6% | 79.0% | 80.2% | 80.4% |
References
- Wei Liu, et al. "SSD: Single Shot MultiBox Detector." ECCV2016.
- Original Implementation (CAFFE)
- A list of other great SSD ports that were sources of inspiration (especially the Chainer repo).