
Awesome - Image Classification

A curated list of deep learning image classification papers and codes since 2014, inspired by awesome-object-detection, deep_learning_object_detection and awesome-deep-learning-papers.

Background

I believe image classification is a great starting point before diving into other computer vision fields, especially for beginners who know nothing about deep learning. When I started to learn computer vision, I made a lot of mistakes, and I wish someone had told me back then which papers to start with. Until now there didn't seem to be a repository with a list of image classification papers, the way deep_learning_object_detection covers detection. Therefore, I decided to make a repository listing deep learning image classification papers and codes to help others. My personal advice for people who know nothing about deep learning: start with VGG, then GoogleNet and ResNet, and feel free to continue with the other listed papers or switch to other fields once you are done.

Note: I also have a repository with PyTorch implementations of some of these image classification networks; you can check it out here.
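If you want to poke around before reading the papers, the snippet below is a minimal sketch (not taken from that repository) of loading a pretrained classifier with torchvision (0.13 or newer) and running it on a single image; the image path is a placeholder:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained ResNet-50 from torchvision, used here only as an illustration
# of running any of the listed CNN classifiers.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

# Standard ImageNet preprocessing: resize, center crop, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg")           # placeholder input image
x = preprocess(img).unsqueeze(0)          # add batch dimension: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(x)                     # (1, 1000) ImageNet class scores
    top5 = logits.topk(5, dim=1).indices  # indices of the 5 most likely classes
print(top5)
```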

Performance Table

For simplicity, I only list the best top-1 and top-5 accuracy on ImageNet reported in each paper. Note that higher accuracy does not necessarily mean one network is better than another, because some networks focus on reducing model complexity rather than improving accuracy, and some papers report only single-crop results on ImageNet while others report model-fusion or multi-crop results. A short sketch of how top-1 and top-5 accuracy are typically computed is given after the column definitions below.

  • ConvNet: name of the convolutional network
  • ImageNet top-1 acc: best top-1 accuracy on ImageNet reported in the paper
  • ImageNet top-5 acc: best top-5 accuracy on ImageNet reported in the paper
  • Published In: the conference or journal the paper was published in
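For reference, here is a minimal PyTorch sketch of how these two metrics are typically computed; the function name and example tensors are hypothetical, not taken from any listed paper:

```python
import torch

def topk_accuracy(logits: torch.Tensor, targets: torch.Tensor, k: int = 1) -> float:
    """Fraction of samples whose true label is among the k highest-scoring classes.

    logits:  (N, num_classes) raw scores from the network
    targets: (N,) ground-truth class indices
    """
    topk = logits.topk(k, dim=1).indices                  # (N, k) best class indices
    correct = (topk == targets.unsqueeze(1)).any(dim=1)   # true label in top k?
    return correct.float().mean().item()

# Hypothetical example: 4 samples, 10 classes.
logits = torch.randn(4, 10)
targets = torch.tensor([3, 1, 7, 0])
print(topk_accuracy(logits, targets, k=1))  # top-1 accuracy
print(topk_accuracy(logits, targets, k=5))  # top-5 accuracy
```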
| ConvNet | ImageNet top-1 acc | ImageNet top-5 acc | Published In |
|---|---|---|---|
| VGG | 76.3 | 93.2 | ICLR2015 |
| GoogleNet | - | 93.33 | CVPR2015 |
| PReLU-nets | - | 95.06 | ICCV2015 |
| ResNet | - | 96.43 | CVPR2015 |
| PreActResNet | 79.9 | 95.2 | CVPR2016 |
| Inceptionv3 | 82.8 | 96.42 | CVPR2016 |
| Inceptionv4 | 82.3 | 96.2 | AAAI2016 |
| Inception-ResNet-v2 | 82.4 | 96.3 | AAAI2016 |
| Inceptionv4 + Inception-ResNet-v2 | 83.5 | 96.92 | AAAI2016 |
| RiR | - | - | ICLR Workshop2016 |
| Stochastic Depth ResNet | 78.02 | - | ECCV2016 |
| WRN | 78.1 | 94.21 | BMVC2016 |
| SqueezeNet | 60.4 | 82.5 | arXiv2017 (rejected by ICLR2017) |
| GeNet | 72.13 | 90.26 | ICCV2017 |
| MetaQNN | - | - | ICLR2017 |
| PyramidNet | 80.8 | 95.3 | CVPR2017 |
| DenseNet | 79.2 | 94.71 | ECCV2017 |
| FractalNet | 75.8 | 92.61 | ICLR2017 |
| ResNext | - | 96.97 | CVPR2017 |
| IGCV1 | 73.05 | 91.08 | ICCV2017 |
| Residual Attention Network | 80.5 | 95.2 | CVPR2017 |
| Xception | 79 | 94.5 | CVPR2017 |
| MobileNet | 70.6 | - | arXiv2017 |
| PolyNet | 82.64 | 96.55 | CVPR2017 |
| DPN | 79 | 94.5 | NIPS2017 |
| Block-QNN | 77.4 | 93.54 | CVPR2018 |
| CRU-Net | 79.7 | 94.7 | IJCAI2018 |
| DLA | 75.3 | - | CVPR2018 |
| ShuffleNet | 75.3 | - | CVPR2018 |
| CondenseNet | 73.8 | 91.7 | CVPR2018 |
| NasNet | 82.7 | 96.2 | CVPR2018 |
| MobileNetV2 | 74.7 | - | CVPR2018 |
| IGCV2 | 70.07 | - | CVPR2018 |
| hier | 79.7 | 94.8 | ICLR2018 |
| PNasNet | 82.9 | 96.2 | ECCV2018 |
| AmoebaNet | 83.9 | 96.6 | AAAI2018 |
| SENet | - | 97.749 | CVPR2018 |
| ShuffleNetV2 | 81.44 | - | ECCV2018 |
| CBAM | 79.93 | 94.41 | ECCV2018 |
| IGCV3 | 72.2 | - | BMVC2018 |
| BAM | 77.56 | 93.71 | BMVC2018 |
| MnasNet | 76.13 | 92.85 | CVPR2018 |
| SKNet | 80.60 | - | CVPR2019 |
| DARTS | 73.3 | 91.3 | ICLR2019 |
| ProxylessNAS | 75.1 | 92.5 | ICLR2019 |
| MobileNetV3 | 75.2 | - | CVPR2019 |
| Res2Net | 79.2 | 94.37 | PAMI2019 |
| LIP-ResNet | 79.33 | 94.6 | ICCV2019 |
| EfficientNet | 84.3 | 97.0 | ICML2019 |
| FixResNeXt | 86.4 | 98.0 | NIPS2019 |
| BiT | 87.5 | - | ECCV2020 |
| PSConv + ResNext101 | 80.502 | 95.276 | ECCV2020 |
| NoisyStudent | 88.4 | 98.7 | CVPR2020 |
| RegNet | 79.9 | - | CVPR2020 |
| GhostNet | 75.7 | - | CVPR2020 |
| ViT | 88.55 | - | ICLR2021 |
| DeiT | 85.2 | - | ICML2021 |
| PVT | 81.7 | - | ICCV2021 |
| T2T-ViT | 83.3 | - | ICCV2021 |
| DeepViT | 80.9 | - | arXiv2021 |
| ViL | 83.7 | - | ICCV2021 |
| TNT | 83.9 | - | arXiv2021 |
| CvT | 87.7 | - | ICCV2021 |
| CViT | 84.1 | - | ICCV2021 |
| Focal-T | 84.0 | - | NIPS2021 |
| Twins | 83.7 | - | NIPS2021 |
| PVTv2 | 81.7 | - | CVM2022 |

Papers&Codes

VGG

Very Deep Convolutional Networks for Large-Scale Image Recognition. Karen Simonyan, Andrew Zisserman

GoogleNet

Going Deeper with Convolutions Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich

PReLU-nets

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

ResNet

Deep Residual Learning for Image Recognition Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

PreActResNet

Identity Mappings in Deep Residual Networks Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

Inceptionv3

Rethinking the Inception Architecture for Computer Vision Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna

Inceptionv4 && Inception-ResNetv2

Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi

RiR

Resnet in Resnet: Generalizing Residual Architectures Sasha Targ, Diogo Almeida, Kevin Lyman

Stochastic Depth ResNet

Deep Networks with Stochastic Depth Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Weinberger

WRN

Wide Residual Networks Sergey Zagoruyko, Nikos Komodakis

SqueezeNet

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer

GeNet

Genetic CNN Lingxi Xie, Alan Yuille

MetaQNN

Designing Neural Network Architectures using Reinforcement Learning Bowen Baker, Otkrist Gupta, Nikhil Naik, Ramesh Raskar

PyramidNet

Deep Pyramidal Residual Networks Dongyoon Han, Jiwhan Kim, Junmo Kim

DenseNet

Densely Connected Convolutional Networks Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger

FractalNet

FractalNet: Ultra-Deep Neural Networks without Residuals Gustav Larsson, Michael Maire, Gregory Shakhnarovich

ResNext

Aggregated Residual Transformations for Deep Neural Networks Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He

IGCV1

Interleaved Group Convolutions for Deep Neural Networks Ting Zhang, Guo-Jun Qi, Bin Xiao, Jingdong Wang

Residual Attention Network

Residual Attention Network for Image Classification Fei Wang, Mengqing Jiang, Chen Qian, Shuo Yang, Cheng Li, Honggang Zhang, Xiaogang Wang, Xiaoou Tang

Xception

Xception: Deep Learning with Depthwise Separable Convolutions François Chollet

MobileNet

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam

PolyNet

PolyNet: A Pursuit of Structural Diversity in Very Deep Networks Xingcheng Zhang, Zhizhong Li, Chen Change Loy, Dahua Lin

DPN

Dual Path Networks Yunpeng Chen, Jianan Li, Huaxin Xiao, Xiaojie Jin, Shuicheng Yan, Jiashi Feng

Block-QNN

Practical Block-wise Neural Network Architecture Generation Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu

CRU-Net

Sharing Residual Units Through Collective Tensor Factorization in Deep Neural Networks Chen Yunpeng, Jin Xiaojie, Kang Bingyi, Feng Jiashi, Yan Shuicheng

DLA

Deep Layer Aggregation Fisher Yu, Dequan Wang, Evan Shelhamer, Trevor Darrell

ShuffleNet

ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun

CondenseNet

CondenseNet: An Efficient DenseNet using Learned Group Convolutions Gao Huang, Shichen Liu, Laurens van der Maaten, Kilian Q. Weinberger

NasNet

Learning Transferable Architectures for Scalable Image Recognition Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le

MobileNetV2

MobileNetV2: Inverted Residuals and Linear Bottlenecks Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen

IGCV2

IGCV2: Interleaved Structured Sparse Convolutional Neural Networks Guotian Xie, Jingdong Wang, Ting Zhang, Jianhuang Lai, Richang Hong, Guo-Jun Qi

hier

Hierarchical Representations for Efficient Architecture Search Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, Koray Kavukcuoglu

PNasNet

Progressive Neural Architecture Search Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, Kevin Murphy

AmoebaNet

Regularized Evolution for Image Classifier Architecture Search Esteban Real, Alok Aggarwal, Yanping Huang, Quoc V Le

SENet

Squeeze-and-Excitation Networks Jie Hu, Li Shen, Samuel Albanie, Gang Sun, Enhua Wu

ShuffleNetV2

ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, Jian Sun

CBAM

CBAM: Convolutional Block Attention Module Sanghyun Woo, Jongchan Park, Joon-Young Lee, In So Kweon

IGCV3

IGCV3: Interleaved Low-Rank Group Convolutions for Efficient Deep Neural Networks Ke Sun, Mingjie Li, Dong Liu, Jingdong Wang

BAM

BAM: Bottleneck Attention Module Jongchan Park, Sanghyun Woo, Joon-Young Lee, In So Kweon

MnasNet

MnasNet: Platform-Aware Neural Architecture Search for Mobile Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Quoc V. Le

SKNet

Selective Kernel Networks Xiang Li, Wenhai Wang, Xiaolin Hu, Jian Yang

DARTS

DARTS: Differentiable Architecture Search Hanxiao Liu, Karen Simonyan, Yiming Yang

ProxylessNAS

ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware Han Cai, Ligeng Zhu, Song Han

MobileNetV3

Searching for MobileNetV3 Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, Hartwig Adam

Res2Net

Res2Net: A New Multi-scale Backbone Architecture Shang-Hua Gao, Ming-Ming Cheng, Kai Zhao, Xin-Yu Zhang, Ming-Hsuan Yang, Philip Torr

LIP-ResNet

LIP: Local Importance-based Pooling Ziteng Gao, Limin Wang, Gangshan Wu

EfficientNet

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks Mingxing Tan, Quoc V. Le

FixResNeXt

Fixing the train-test resolution discrepancy Hugo Touvron, Andrea Vedaldi, Matthijs Douze, Hervé Jégou

BiT

Big Transfer (BiT): General Visual Representation Learning Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby

PSConv + ResNext101

PSConv: Squeezing Feature Pyramid into One Compact Poly-Scale Convolutional Layer Duo Li, Anbang Yao, Qifeng Chen

NoisyStudent

Self-training with Noisy Student improves ImageNet classification Qizhe Xie, Minh-Thang Luong, Eduard Hovy, Quoc V. Le

RegNet

Designing Network Design Spaces Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár

GhostNet

GhostNet: More Features from Cheap Operations Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, Chang Xu

ViT

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby

DeiT

Training data-efficient image transformers & distillation through attention Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou

PVT

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao

T2T-ViT

Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zihang Jiang, Francis EH Tay, Jiashi Feng, Shuicheng Yan

DeepViT

DeepViT: Towards Deeper Vision Transformer Daquan Zhou, Bingyi Kang, Xiaojie Jin, Linjie Yang, Xiaochen Lian, Zihang Jiang, Qibin Hou, Jiashi Feng

ViL

Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding Pengchuan Zhang, Xiyang Dai, Jianwei Yang, Bin Xiao, Lu Yuan, Lei Zhang, Jianfeng Gao

TNT

Transformer in Transformer Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, Yunhe Wang

CvT

CvT: Introducing Convolutions to Vision Transformers Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang

CViT

CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification Chun-Fu (Richard) Chen, Quanfu Fan, Rameswar Panda

Focal-T

Focal Attention for Long-Range Interactions in Vision Transformers Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan, Jianfeng Gao

Twins

Twins: Revisiting the Design of Spatial Attention in Vision Transformers

PVTv2

Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao