SparseNet
Sparsely Aggregated Convolutional Networks [PDF]
Ligeng Zhu, Ruizhi Deng, Michael Maire, Zhiwei Deng, Greg Mori, Ping Tan
What is SparseNet?
SparseNet is a network architecture in which each layer aggregates only the previous layers at exponentially spaced offsets: layer i takes inputs from layers i - 1, i - 2, i - 4, i - 8, i - 16, and so on, giving each layer O(log i) incoming connections instead of the O(i) used by dense aggregation. A sketch of this connectivity follows.
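As a minimal illustration of this connectivity pattern, the PyTorch sketch below concatenates only the exponentially offset predecessor outputs before each convolution. Note that `sparse_predecessors`, `SparseBlock`, and all channel sizes here are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


def sparse_predecessors(i):
    """Indices of the earlier layers feeding layer i under sparse
    aggregation: i - 1, i - 2, i - 4, i - 8, ... (exponential offsets)."""
    preds, offset = [], 1
    while i - offset >= 0:
        preds.append(i - offset)
        offset *= 2
    return preds


class SparseBlock(nn.Module):
    """A sparsely aggregated block (hypothetical layer sizes): each layer
    concatenates the outputs of its O(log i) exponentially offset
    predecessors, instead of all i predecessors as in DenseNet."""

    def __init__(self, num_layers, growth_rate=16, in_channels=16):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = [in_channels]  # channels[i] = output width of layer i
        for i in range(1, num_layers + 1):
            in_ch = sum(channels[j] for j in sparse_predecessors(i))
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_ch, growth_rate, kernel_size=3, padding=1),
            ))
            channels.append(growth_rate)

    def forward(self, x):
        outputs = [x]  # outputs[i] holds the output of layer i
        for i, layer in enumerate(self.layers, start=1):
            # Gather only the exponentially offset predecessors.
            inputs = torch.cat(
                [outputs[j] for j in sparse_predecessors(i)], dim=1)
            outputs.append(layer(inputs))
        return outputs[-1]
```

For example, `SparseBlock(num_layers=8)(torch.randn(1, 16, 32, 32))` runs the block on a CIFAR-sized input; with 8 layers, layer 8 reads from layers 7, 6, 4, and 0 rather than from all eight predecessors.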
Why use SparseNet?
This connectivity pattern yields state-of-the-art accuracies on the small-scale CIFAR-10 and CIFAR-100 datasets. On the large-scale ILSVRC 2012 (ImageNet) dataset, SparseNet matches the accuracy of ResNet and DenseNet while using far fewer parameters.
Better Performance
(Accuracy comparison tables: without bottleneck-compression (BC) and with BC.)
Efficient Parameter Utilization
(Figure: parameter efficiency on ImageNet.)
We note that SparseNet shows comparable parameter efficiency even when compared with pruned models.
Pretrained model
Refer to the source folder for pretrained models.
Cite
If SparseNet helps your research, please cite our work :)
@article{DBLP:journals/corr/abs-1801-05895,
  author        = {Ligeng Zhu and Ruizhi Deng and Michael Maire and
                   Zhiwei Deng and Greg Mori and Ping Tan},
  title         = {Sparsely Aggregated Convolutional Networks},
  journal       = {CoRR},
  volume        = {abs/1801.05895},
  year          = {2018},
  url           = {http://arxiv.org/abs/1801.05895},
  archivePrefix = {arXiv},
  eprint        = {1801.05895},
  biburl        = {https://dblp.org/rec/bib/journals/corr/abs-1801-05895},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}