Poly-Scale Convolution
Official implementation of our PSConv operator, as described in PSConv: Squeezing Feature Pyramid into One Compact Poly-Scale Convolutional Layer (ECCV'20) by Duo Li, Anbang Yao and Qifeng Chen, evaluated on the MS COCO 2017 benchmark.
We collect multi-scale feature representations at a finer granularity by tactfully allocating a spectrum of dilation rates inside the kernel lattice.
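To give a feel for the operator, the snippet below is a minimal PyTorch sketch of this idea, not the official implementation: input channels are split into groups and each group is convolved with its own dilation rate, so a single layer mixes several receptive fields. The group count and the dilation set `(1, 2, 4, 8)` are illustrative assumptions; refer to the released code for the exact kernel-lattice allocation.

```python
import torch
import torch.nn as nn

class PolyScaleConvSketch(nn.Module):
    """Toy poly-scale convolution: one dilation rate per channel group."""

    def __init__(self, in_ch, out_ch, kernel_size=3, dilations=(1, 2, 4, 8)):
        super().__init__()
        assert in_ch % len(dilations) == 0 and out_ch % len(dilations) == 0
        in_g, out_g = in_ch // len(dilations), out_ch // len(dilations)
        # One branch per dilation rate; padding = d * (k // 2) keeps the spatial size fixed.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_g, out_g, kernel_size,
                      padding=d * (kernel_size // 2), dilation=d)
            for d in dilations
        )

    def forward(self, x):
        # Split the channels evenly, convolve each chunk at its own scale, re-concatenate.
        chunks = torch.chunk(x, len(self.branches), dim=1)
        return torch.cat([conv(c) for conv, c in zip(self.branches, chunks)], dim=1)


if __name__ == "__main__":
    layer = PolyScaleConvSketch(64, 128)
    print(layer(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 128, 56, 56])
```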
Getting Started
Installation
Follow the instructions in INSTALL.md for installation. More detailed guidance can be found in the MMDetection documentation.
Download ImageNet pre-trained checkpoints
Fetch the pre-trained weights of the PS-ResNet-50, PS-ResNet-101 and PS-ResNeXt-101 (32x4d) backbones and place them in a local path. Set the pretrained path in the config file and launch detector training.
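As a rough illustration, the relevant piece of an MMDetection-style config looks like the sketch below; the checkpoint path and filename are placeholders for wherever you stored the downloaded weights, and the rest of the model definition stays as shipped in the provided configs.

```python
# Sketch of the pretrained-path override in a detector config (path is a placeholder).
model = dict(
    pretrained='/your/local/path/ps_resnet50.pth',  # downloaded PS-ResNet-50 weights
    # backbone, neck and heads are defined as in the provided config files
)
```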
Training
The default learning rate in the config files is for 8 GPUs and 2 img/GPU (batch size = 8*2 = 16). According to the Linear Scaling Rule, you need to set the learning rate proportional to the batch size if you use a different number of GPUs or images per GPU, e.g., lr=0.01 for 4 GPUs * 2 img/GPU and lr=0.08 for 16 GPUs * 4 img/GPU.
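If it helps to see the arithmetic, the tiny helper below reproduces the numbers above, assuming the common default of lr=0.02 at the reference batch size of 16; that base value is an assumption, so check the learning rate in your config.

```python
# Linear Scaling Rule: lr scales with total batch size relative to the reference setup.
def scaled_lr(num_gpus, imgs_per_gpu, base_lr=0.02, base_batch=16):
    return base_lr * (num_gpus * imgs_per_gpu) / base_batch

print(scaled_lr(4, 2))   # 0.01 for 4 GPUs * 2 img/GPU
print(scaled_lr(16, 4))  # 0.08 for 16 GPUs * 4 img/GPU
```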
```shell
# single-gpu training
python tools/train.py ${CONFIG_FILE}

# multi-gpu training
./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
```
Optional arguments are:

- `--validate` (strongly recommended): Perform evaluation at every k epochs (default value is 1) during the training.
- `--work_dir ${WORK_DIR}`: Override the working directory specified in the config file.
- `--resume_from ${CHECKPOINT_FILE}`: Resume from a previous checkpoint file.
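For example, a concrete multi-GPU launch might look like the following; the config file name is hypothetical and should be replaced with one of the configs shipped in this repository.

```shell
# Example: 8-GPU training with per-epoch evaluation (config path is hypothetical)
./tools/dist_train.sh configs/faster_rcnn_r50_fpn_1x.py 8 --validate
```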
Test
```shell
# single-gpu testing
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] [--show]

# multi-gpu testing
./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}]
```
Optional arguments:

- `RESULT_FILE`: Filename of the output results in pickle format. If not specified, the results will not be saved to a file.
- `EVAL_METRICS`: Items to be evaluated on the results. Allowed values are: `proposal_fast`, `proposal`, `bbox`, `segm`, `keypoints`.
- `--show`: If specified, detection results will be plotted on the images and shown in a new window. (Only applicable for single-GPU testing.)
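A concrete evaluation run might look like this; the config and checkpoint paths are hypothetical placeholders.

```shell
# Example: 8-GPU testing, saving results and reporting box AP (paths are hypothetical)
./tools/dist_test.sh configs/faster_rcnn_r50_fpn_1x.py work_dirs/faster_rcnn_r50_fpn_1x/latest.pth 8 \
    --out results.pkl --eval bbox
```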
Model Zoo
Faster R-CNN
Backbone | Style | Lr schd | box AP | Download |
---|---|---|---|---|
R-50-FPN | pytorch | 1x | 38.4 | model \| log |
R-101-FPN | pytorch | 1x | 40.9 | model \| log |
X-101-FPN | pytorch | 1x | 41.3 | model \| log |
Mask R-CNN
Backbone | Style | Lr schd | box AP | mask AP | Download |
---|---|---|---|---|---|
R-50-FPN | pytorch | 1x | 39.4 | 35.6 | model \| log |
R-101-FPN | pytorch | 1x | 41.6 | 37.4 | model \| log |
X-101-FPN | pytorch | 1x | 42.4 | 38.0 | model \| log |
Cascade R-CNN
Backbone | Style | Lr schd | box AP | Download |
---|---|---|---|---|
R-50-FPN | pytorch | 1x | 41.9 | model \| log |
R-101-FPN | pytorch | 1x | 43.8 | model \| log |
X-101-FPN | pytorch | 1x | 44.4 | model \| log |
Cascade Mask R-CNN
Backbone | Style | Lr schd | box AP | mask AP | Download |
---|---|---|---|---|---|
R-50-FPN | pytorch | 1x | 42.9 | 36.9 | model \| log |
R-101-FPN | pytorch | 1x | 44.6 | 38.4 | model \| log |
X-101-FPN | pytorch | 1x | 45.3 | 38.9 | model \| log |
Acknowledgement
This implementation is built upon MMDetection. Thanks to Kai Chen for releasing this awesome toolbox and for his helpful discussions.
Since this project was finished nearly one year ago, our code is adapted from an early commit, 713e98b.
Citation
If you find our work useful in your research, please consider citing:
```bibtex
@InProceedings{Li_2020_ECCV,
    author = {Li, Duo and Yao, Anbang and Chen, Qifeng},
    title = {PSConv: Squeezing Feature Pyramid into One Compact Poly-Scale Convolutional Layer},
    booktitle = {The European Conference on Computer Vision (ECCV)},
    month = {August},
    year = {2020}
}
```