Knowledge Distillation for Image Classification

This repository includes the official implementations of the following papers:

  • NKD and USKD (ICCV 2023): From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels

  • ViTKD (CVPRW 2024): Practical Guidelines for ViT Feature Knowledge Distillation

It also provides unofficial implementations of other distillation papers.

If you find this repository helpful, please give it a star and cite the relevant papers.

Install

  • Prepare the ImageNet dataset in data/imagenet
  • # Set environment
    pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
    pip install -r requirements.txt
    
  • This repo uses mmcls == 1.0.0rc6. If you want to use a lower mmcls version for distillation, refer to the master branch and adapt the code accordingly.
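
After installing, a quick sanity check like the following (an illustrative snippet, not part of this repo) confirms that the pinned versions from the install step resolved correctly:

    # Illustrative post-install sanity check (not part of this repo):
    # verifies the pinned versions from the install step above.
    import torch
    import torchvision
    import mmcls

    assert torch.__version__.startswith("1.10.1"), torch.__version__
    assert torchvision.__version__.startswith("0.11.2"), torchvision.__version__
    assert mmcls.__version__ == "1.0.0rc6", mmcls.__version__
    print("CUDA available:", torch.cuda.is_available())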

Run

  • Please refer to nkd.md and vitkd.md to train the student and obtain its weights.
  • You can modify the configs to choose different distillation methods and teacher-student pairs (an illustrative config sketch follows this list).
  • The implementation details of the different methods can be found in the dis_losses folder (a simplified loss sketch is shown after the config sketch).
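
A distiller config here follows the usual mmcls pattern of inheriting a base file and overriding fields. The sketch below is purely illustrative: the field names (distill_cfg, NKDLoss, gamma, temp) and the base file name are assumptions, and the authoritative schema is in this repo's configs.

    # Purely illustrative config sketch; the real field names and values
    # live in this repo's configs folder.
    _base_ = ['./res34_distill_res18_img.py']  # hypothetical base config

    model = dict(
        distill_cfg=[  # assumed schema; check the repo's configs
            dict(methods=[dict(type='NKDLoss',  # a loss from dis_losses
                               name='loss_nkd',
                               gamma=1.5,       # non-target weight (assumed)
                               temp=1.0)]),     # temperature (assumed)
        ],
    )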
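
To give a flavor of what lives in dis_losses, here is a minimal, unofficial sketch of the normalized loss idea from the NKD paper: the ground-truth class is supervised with the teacher's target probability, while the remaining (non-target) logits are re-normalized with a temperature before the distillation term is applied. All names and hyper-parameter values are illustrative; the official loss is in dis_losses.

    import torch
    import torch.nn.functional as F

    def nkd_loss(logit_s, logit_t, target, gamma=1.5, temp=1.0):
        """Sketch of a normalized KD loss (names and defaults are illustrative)."""
        N, C = logit_s.shape
        s = F.softmax(logit_s, dim=1)
        t = F.softmax(logit_t, dim=1)

        mask = F.one_hot(target, C).bool()
        s_target = s[mask]  # (N,) student probability of the true class
        t_target = t[mask]  # (N,) teacher probability of the true class

        # Target term: teacher-weighted log-likelihood of the true class.
        loss_target = -(t_target * torch.log(s_target + 1e-7)).mean()

        # Non-target logits, re-softmaxed with temperature so they sum to 1.
        s_nt = F.log_softmax(logit_s[~mask].view(N, C - 1) / temp, dim=1)
        t_nt = F.softmax(logit_t[~mask].view(N, C - 1) / temp, dim=1)

        # Non-target term: cross-entropy on the normalized distributions.
        loss_nontarget = -gamma * (temp ** 2) * (t_nt * s_nt).sum(dim=1).mean()
        return loss_target + loss_nontarget

A call like nkd_loss(student_logits, teacher_logits, labels) would then be added to the usual cross-entropy objective.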

Citing NKD and USKD

@article{yang2023knowledge,
  title={From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels},
  author={Yang, Zhendong and Zeng, Ailing and Li, Zhe and Zhang, Tianke and Yuan, Chun and Li, Yu},
  journal={arXiv preprint arXiv:2303.13005},
  year={2023}
}

Citing ViTKD

@article{yang2022vitkd,
  title={ViTKD: Practical Guidelines for ViT feature knowledge distillation},
  author={Yang, Zhendong and Li, Zhe and Zeng, Ailing and Li, Zexian and Yuan, Chun and Li, Yu},
  journal={arXiv preprint arXiv:2209.02432},
  year={2022}
}

Acknowledgement

Our code is based on the MMPretrain project.