torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation
torchdistill (formerly kdkit) offers various state-of-the-art knowledge distillation methods and enables you to design (new) experiments simply by editing a declarative yaml config file instead of Python code. Even when you need to extract intermediate representations in teacher/student models, you will NOT need to reimplement the models, which often changes their forward interface; instead, you specify the module path(s) in the yaml file. Refer to this paper for more details.
In addition to knowledge distillation, this framework helps you design and perform general deep learning experiments (WITHOUT coding) for reproducible deep learning studies. For instance, it enables you to train models without teachers simply by excluding teacher entries from a declarative yaml config file. You can find such examples below and in configs/sample/.
When you refer to torchdistill in your paper, please cite this paper instead of this GitHub repository.
If you use torchdistill as part of your work, your citation is appreciated and motivates me to maintain and upgrade this framework!
Important Notice
To run the scripts in examples/, please use the PyPI package (i.e., pip3 install torchdistill) instead of the local package files in torchdistill/, because I am preparing for the next major release and the example scripts have not been synced with the local package files in torchdistill/.
On top of that, you can add your own modules (models, loss functions, datasets, etc.) without editing code in the local package torchdistill/ (see Discussions for more details).
While waiting for the next major release of torchdistill, I strongly suggest that you
- use torchdistill v0.3.3 (pip install torchdistill) with torchvision <= v0.13.1
- use the executable scripts under examples/legacy/
- refer to configs/legacy/
Forward hook manager
Using ForwardHookManager, you can extract intermediate representations in a model without modifying the interface of its forward function.
This example notebook will give you a better idea of the usage, such as knowledge distillation and analysis of intermediate representations.
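Below is a minimal sketch of the usage with torchvision's ResNet-18; the module paths 'layer2' and 'fc' are just examples and should be replaced with the paths of the modules you want to hook.

```python
import torch
from torchvision import models
from torchdistill.core.forward_hook import ForwardHookManager

device = torch.device('cpu')
model = models.resnet18(pretrained=True)

# Register hooks by module path without touching the model's forward()
forward_hook_manager = ForwardHookManager(device)
forward_hook_manager.add_hook(model, 'layer2', requires_input=True, requires_output=False)
forward_hook_manager.add_hook(model, 'fc', requires_input=False, requires_output=True)

x = torch.rand(16, 3, 224, 224)
with torch.no_grad():
    y = model(x)

# Captured tensors are keyed by the module paths registered above
io_dict = forward_hook_manager.pop_io_dict()
layer2_input = io_dict['layer2']['input']
fc_output = io_dict['fc']['output']
```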
1 experiment → 1 declarative PyYAML config file
In torchdistill, many components and PyTorch modules are abstracted, e.g., models, datasets, optimizers, losses, and more! You can define them in a declarative PyYAML config file so that it can be seen as a summary of your experiment, and in many cases, you will NOT need to write Python code at all. Take a look at some configurations available in configs/. You'll see what modules are abstracted and how they are defined in a declarative PyYAML config file to design an experiment.
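As a purely illustrative sketch (not torchdistill's actual config loader), the snippet below shows the idea behind this abstraction: a declarative entry names a component and its keyword arguments, and the framework builds the object for you. The config text and the getattr-based builder here are assumptions for illustration only; see configs/ for real, complete configurations.

```python
import yaml
from torchvision import models

# Hypothetical, simplified config entry in the style of configs/ (illustration only)
CONFIG_TEXT = """
models:
  student_model:
    name: 'resnet18'
    kwargs:
      num_classes: 100
"""

config = yaml.safe_load(CONFIG_TEXT)
entry = config['models']['student_model']
# Instantiate the torchvision model named in the config with the declared kwargs
student_model = getattr(models, entry['name'])(**entry['kwargs'])
print(type(student_model).__name__)  # ResNet
```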
Top-1 validation accuracy for ILSVRC 2012 (ImageNet)
| T: ResNet-34* | Pretrained | KD | AT | FT | CRD | Tf-KD | SSKD | L2 | PAD-L2 | KR |
|---|---|---|---|---|---|---|---|---|---|---|
| S: ResNet-18 | 69.76* | 71.37 | 70.90 | 71.56 | 70.93 | 70.52 | 70.09 | 71.08 | 71.71 | 71.64 |
| Original work | N/A | N/A | 70.70 | 71.43** | 71.17 | 70.42 | 71.62 | 70.90 | 71.71 | 71.61 |
* The pretrained ResNet-34 and ResNet-18 are provided by torchvision.
** FT is assessed with ILSVRC 2015 in the original work.
Most of the results in the S: ResNet-18 row are reported in this paper, and their checkpoints (trained weights), configurations, and log files are available. The configurations reuse the hyperparameters (e.g., number of epochs) used in the original work, except for KD.
Examples
Executable code can be found in examples/, such as:
- Image classification: ImageNet (ILSVRC 2012), CIFAR-10, CIFAR-100, etc
- Object detection: COCO 2017, etc
- Semantic segmentation: COCO 2017, PASCAL VOC, etc
- Text classification: GLUE, etc
For CIFAR-10 and CIFAR-100, some models are reimplemented and available as pretrained models in torchdistill. More details can be found here.
Some Transformer models fine-tuned by torchdistill for GLUE tasks are available at Hugging Face Model Hub. Sample GLUE benchmark results and details can be found here.
Google Colab Examples
The following examples are available in demo/. Note that these examples are for Google Colab users. Usually, examples/ would be a better reference if you have your own GPU(s).
CIFAR-10 and CIFAR-100
GLUE
The GLUE examples write out test prediction files so that you can check the test performance on the GLUE leaderboard system.
PyTorch Hub
If you find models on PyTorch Hub or GitHub repositories supporting PyTorch Hub, you can import them as teacher/student models simply by editing a declarative yaml config file.
For example, to use a pretrained ResNeSt-50 available in huggingface/pytorch-image-models (aka timm) as a teacher model for the ImageNet dataset, you can import the model via PyTorch Hub with the following entry in your declarative yaml config file:
models:
  teacher_model:
    name: 'resnest50d'
    repo_or_dir: 'huggingface/pytorch-image-models'
    kwargs:
      num_classes: 1000
      pretrained: True
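For reference, this entry corresponds roughly to the following torch.hub.load call (a hedged sketch; torchdistill's internal wiring may differ):

```python
import torch

# name maps to the Hub entrypoint, repo_or_dir to the repository, and the
# kwargs are forwarded to the model builder (assumption for illustration)
teacher_model = torch.hub.load(
    'huggingface/pytorch-image-models',
    'resnest50d',
    num_classes=1000,
    pretrained=True
)
teacher_model.eval()
```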
How to set up
- Python >= 3.7
- pipenv (optional)
Install by pip/pipenv
pip3 install torchdistill
# or use pipenv
pipenv install torchdistill
Install from this repository (not recommended)
git clone https://github.com/yoshitomo-matsubara/torchdistill.git
cd torchdistill/
pip3 install -e .
# or use pipenv
pipenv install "-e ."
Issues / Questions / Requests
The documentation is a work in progress. In the meantime, feel free to create an issue if you find a bug.
If you have a question or a feature request, start a new discussion here.
Please make sure the issue/question/request has not been addressed yet by searching through the existing issues and discussions.
Citation
If you use torchdistill in your research, please cite the following paper.
[Paper] [Preprint]
@inproceedings{matsubara2021torchdistill,
title={{torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation}},
author={Matsubara, Yoshitomo},
booktitle={International Workshop on Reproducible Research in Pattern Recognition},
pages={24--44},
year={2021},
organization={Springer}
}
Acknowledgments
Since June 2022, this project has been supported by JetBrains' Free License Programs (Open Source).
References
- 🔍 pytorch/vision/references/classification/
- 🔍 pytorch/vision/references/detection/
- 🔍 pytorch/vision/references/segmentation/
- 🔍 huggingface/transformers/examples/pytorch/text-classification
- 🔍 Geoffrey Hinton, Oriol Vinyals and Jeff Dean. "Distilling the Knowledge in a Neural Network" (Deep Learning and Representation Learning Workshop: NeurIPS 2014)
- 🔍 Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta and Yoshua Bengio. "FitNets: Hints for Thin Deep Nets" (ICLR 2015)
- 🔍 Junho Yim, Donggyu Joo, Jihoon Bae and Junmo Kim. "A Gift From Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning" (CVPR 2017)
- 🔍 Sergey Zagoruyko and Nikos Komodakis. "Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer" (ICLR 2017)
- 🔍 Nikolaos Passalis and Anastasios Tefas. "Learning Deep Representations with Probabilistic Knowledge Transfer" (ECCV 2018)
- 🔍 Jangho Kim, Seonguk Park and Nojun Kwak. "Paraphrasing Complex Network: Network Compression via Factor Transfer" (NeurIPS 2018)
- 🔍 Byeongho Heo, Minsik Lee, Sangdoo Yun and Jin Young Choi. "Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons" (AAAI 2019)
- 🔍 Tong He, Chunhua Shen, Zhi Tian, Dong Gong, Changming Sun and Youliang Yan. "Knowledge Adaptation for Efficient Semantic Segmentation" (CVPR 2019)
- 🔍 Wonpyo Park, Dongju Kim, Yan Lu and Minsu Cho. "Relational Knowledge Distillation" (CVPR 2019)
- 🔍 Sungsoo Ahn, Shell Xu Hu, Andreas Damianou, Neil D. Lawrence and Zhenwen Dai. "Variational Information Distillation for Knowledge Transfer" (CVPR 2019)
- 🔍 Yoshitomo Matsubara, Sabur Baidya, Davide Callegaro, Marco Levorato and Sameer Singh. "Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems" (Workshop on Hot Topics in Video Analytics and Intelligent Edges: MobiCom 2019)
- 🔍 Baoyun Peng, Xiao Jin, Jiaheng Liu, Dongsheng Li, Yichao Wu, Yu Liu, Shunfeng Zhou and Zhaoning Zhang. "Correlation Congruence for Knowledge Distillation" (ICCV 2019)
- 🔍 Frederick Tung and Greg Mori. "Similarity-Preserving Knowledge Distillation" (ICCV 2019)
- 🔍 Yonglong Tian, Dilip Krishnan and Phillip Isola. "Contrastive Representation Distillation" (ICLR 2020)
- 🔍 Yoshitomo Matsubara and Marco Levorato. "Neural Compression and Filtering for Edge-assisted Real-time Object Detection in Challenged Networks" (ICPR 2020)
- 🔍 Li Yuan, Francis E. H. Tay, Guilin Li, Tao Wang and Jiashi Feng. "Revisiting Knowledge Distillation via Label Smoothing Regularization" (CVPR 2020)
- 🔍 Guodong Xu, Ziwei Liu, Xiaoxiao Li and Chen Change Loy. "Knowledge Distillation Meets Self-Supervision" (ECCV 2020)
- 🔍 Youcai Zhang, Zhonghao Lan, Yuchen Dai, Fangao Zeng, Yan Bai, Jie Chang and Yichen Wei. "Prime-Aware Adaptive Distillation" (ECCV 2020)
- 🔍 Pengguang Chen, Shu Liu, Hengshuang Zhao and Jiaya Jia. "Distilling Knowledge via Knowledge Review" (CVPR 2021)