• Stars: 915
• Rank: 49,576 (Top 1.0 %)
• License: MIT License
• Created: about 6 years ago
• Updated: 3 months ago

Repository Details

Collection of recent methods on (deep) neural network compression and acceleration.

EfficientDNNs

A collection of recent methods on DNN compression and acceleration. Methods for efficient DNNs fall mainly into 5 categories:

  • neural architecture re-design or search (NAS)
    • maintain accuracy with less cost (e.g., #Params, #FLOPs): MobileNet, ShuffleNet, etc.
    • maintain cost with more accuracy: Inception, ResNeXt, Xception, etc.
  • pruning (structured and unstructured; a minimal sketch follows this list)
  • quantization
  • matrix/low-rank decomposition
  • knowledge distillation (KD)
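
To make the pruning entry above concrete, here is a minimal sketch (an illustrative assumption, not code from any paper in this list) of one-shot unstructured magnitude pruning in PyTorch: within each layer, keep the largest-magnitude weights and zero out the rest. The toy model, the 90% sparsity level, and the helper name magnitude_prune_ are made up for illustration.

```python
# Minimal sketch (assumption, not code from this repo): one-shot unstructured
# magnitude pruning in PyTorch. Keep the largest-|w| weights per layer, zero the rest.
import torch
import torch.nn as nn


def magnitude_prune_(model: nn.Module, sparsity: float = 0.9) -> None:
    """Zero out the `sparsity` fraction of smallest-magnitude weights, in place."""
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            w = module.weight.data
            k = int(w.numel() * sparsity)                      # how many weights to remove
            if k == 0:
                continue
            threshold = w.abs().flatten().kthvalue(k).values   # k-th smallest |w|
            mask = (w.abs() > threshold).to(w.dtype)           # 1 = keep, 0 = prune
            w.mul_(mask)                                       # apply the sparsity mask


if __name__ == "__main__":
    # Toy model, purely for demonstration.
    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
    magnitude_prune_(model, sparsity=0.9)
    weights = [p for n, p in model.named_parameters() if n.endswith("weight")]
    total = sum(p.numel() for p in weights)
    zeros = sum((p == 0).sum().item() for p in weights)
    print(f"weight sparsity: {zeros / total:.2%}")             # roughly 90%
```

In practice the pruned model is then fine-tuned (re-applying the mask so pruned weights stay zero), and structured pruning removes whole filters or channels instead of individual weights, which is what translates into real speedup without sparse kernels (see Papers [Actual Acceleration via Sparsity] below).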

Note: this repo focuses more on pruning (with the lottery ticket hypothesis, or LTH, as a sub-topic), KD, and quantization. For other topics such as NAS, see the more comprehensive collections listed under Related Repos and Websites at the end of this file. Pull requests adding pertinent papers are welcome.

Other repos:

  • LTH (lottery ticket hypothesis) and its broader form, pruning at initialization (PaI), are now at the frontier of network pruning. The PaI papers are singled out into the Awesome-Pruning-at-Initialization repo. Welcome to check it out!
  • Awesome-Efficient-ViT: a curated list of efficient vision transformers.

About abbreviations: in the lists below, o stands for oral, s for spotlight, b for best paper, and w for workshop.

Surveys

Papers [Pruning and Quantization]

1980s, 1990s

2000s

2011

2013

2014

2015

2016

2017

2018

2019

2020

2021

2022

2023


Papers [Actual Acceleration via Sparsity]


Papers [Lottery Ticket Hypothesis (LTH)]

For LTH and other Pruning at Initialization papers, please refer to Awesome-Pruning-at-Initialization.


Papers [Bayesian Compression]

Papers [Knowledge Distillation (KD)]

Before 2014

2014

2016

2017

2018

2019

2020

2021

2022

Papers [AutoML (NAS etc.)]

Papers [Interpretability]

Workshops

Books & Courses

Lightweight DNN Engines/APIs

Related Repos and Websites

More Repositories

 1. Collaborative-Distillation: [CVPR'20] Collaborative Distillation for Ultra-Resolution Universal Style Transfer (PyTorch). Python, 180 stars.
 2. Regularization-Pruning: [ICLR'21] Neural Pruning via Growing Regularization (PyTorch). Python, 73 stars.
 3. ASSL: [NeurIPS'21 Spotlight] Aligned Structured Sparsity Learning for Efficient Image Super-Resolution (PyTorch). Python, 59 stars.
 4. Awesome-Pruning-at-Initialization: [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization. 44 stars.
 5. Smile-Pruning: A generic code base for neural network pruning, especially for pruning at initialization. Python, 30 stars.
 6. Good-DA-in-KD: [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective. Python, 29 stars.
 7. Why-the-State-of-Pruning-so-Confusing: [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning. 29 stars.
 8. smilelogging: Python logging package for easy reproducible experimenting in research. Python, 25 stars.
 9. TPP: [ICLR'23] Trainability Preserving Neural Pruning (PyTorch). Python, 23 stars.
10. Awesome-Efficient-ViT: Recent Advances on Efficient Vision Transformers. 22 stars.
11. SRP: [ICLR'22] PyTorch code for our paper "Learning Efficient Image Super-Resolution Networks via Structure-Regularized Pruning". Python, 18 stars.
12. Caffe_IncReg: [IJCNN'19, IEEE JSTSP'19] Caffe code for our paper "Structured Pruning for Efficient ConvNets via Incremental Regularization"; [BMVC'18] "Structured Probabilistic Pruning for Convolutional Neural Network Acceleration". Makefile, 14 stars.
13. WritingTips: "Good scientific writing is not a matter of life and death; it is much more serious than that." TeX, 7 stars.
14. Efficient-NeRF: Python, 7 stars.
15. UtilsHub: Python, 3 stars.
16. LowlevelVision: Paper collection for low-level vision. 3 stars.
17. AdversarialAttacks: 2 stars.