Awesome Parameter-Efficient Transfer Learning

A curated list of awesome parameter-efficient transfer learning methods.

Code Library

Coming soon...

Papers

Survey

  • Visual Tuning, arXiv 2023.

    Bruce XB Yu, Jianlong Chang, Haixin Wang, Lingbo Liu, Shijie Wang, Zhiyu Wang, Junfan Lin, Lingxi Xie, Haojie Li, Zhouchen Lin, et al.

    [Paper][Code]

Prompt

  • Learning to Prompt for Vision-Language Models, IJCV 2022.

    Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu.

    [Paper][Code]

  • Visual Prompt Tuning, ECCV 2022. (see the prompt-tuning sketch after this list)

    Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, Ser-Nam Lim.

    [Paper][Code]

  • HyperPrompt: Prompt-based Task-Conditioning of Transformers, ICML 2022.

    Yun He, Steven Zheng, Yi Tay, Jai Gupta, Yu Du, Vamsi Aribandi, Zhe Zhao, YaGuang Li, Zhao Chen, Donald Metzler, et al.

    [Paper][Code]

  • MaPLe: Multi-modal Prompt Learning, CVPR 2023.

    Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, Salman Khan, Fahad Shahbaz Khan.

    [Paper][Code]

  • Hierarchical Prompt Learning for Multi-Task Learning, CVPR 2023.

    Yajing Liu, Yuning Lu, Hao Liu, Yaozu An, Zhuoran Xu, Zhuokun Yao, Baofeng Zhang, Zhiwei Xiong, Chenguang Gui.

    [Paper][Code]

  • Visual Exemplar Driven Task-Prompting for Unified Perception in Autonomous Driving, CVPR 2023.

    Xiwen Liang, Minzhe Niu, Jianhua Han, Hang Xu, Chunjing Xu, Xiaodan Liang.

    [Paper][Code]

  • Dual Modality Prompt Tuning for Vision-Language Pre-Trained Model, TMM 2023.

    Yinghui Xing, Qirui Wu, De Cheng, Shizhou Zhang, Guoqiang Liang, Peng Wang, Yanning Zhang.

    [Paper][Code]
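
To make the common recipe in this section concrete: keep the pre-trained backbone frozen and learn a small set of extra input tokens. Below is a minimal PyTorch sketch of VPT-style shallow prompting; the class name, prompt length, and the generic nn.TransformerEncoder stand-in for the backbone are illustrative assumptions, not code from any of the papers above.

    import torch
    import torch.nn as nn

    class PromptedEncoder(nn.Module):
        """Prepend learnable prompt tokens to a frozen backbone's input;
        only the prompts and the task head receive gradients."""
        def __init__(self, backbone, embed_dim, num_prompts=10, num_classes=100):
            super().__init__()
            self.backbone = backbone
            for p in self.backbone.parameters():  # freeze pre-trained weights
                p.requires_grad = False
            self.prompts = nn.Parameter(torch.zeros(1, num_prompts, embed_dim))
            nn.init.trunc_normal_(self.prompts, std=0.02)
            self.head = nn.Linear(embed_dim, num_classes)  # trainable task head

        def forward(self, tokens):
            # tokens: (B, N, D) patch/word embeddings from the frozen embed layer
            b = tokens.size(0)
            x = torch.cat([self.prompts.expand(b, -1, -1), tokens], dim=1)
            x = self.backbone(x)  # frozen transformer blocks
            return self.head(x.mean(dim=1))

    # Usage with a stand-in backbone:
    # layer = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
    # model = PromptedEncoder(nn.TransformerEncoder(layer, num_layers=2), embed_dim=768)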

Adapter

  • Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks, ACL 2021.

    Rabeeh Karimi Mahabadi, Sebastian Ruder, Mostafa Dehghani, James Henderson.

    [Paper][Code]

  • Compacter: Efficient Low-Rank Hypercomplex Adapter Layers, NeurIPS 2021.

    Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder.

    [Paper][Code]

  • AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition, NeurIPS 2022. (see the bottleneck-adapter sketch after this list)

    Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, Ping Luo.

    [Paper][Code]

  • Polyhistor: Parameter-Efficient Multi-Task Adaptation for Dense Vision Tasks, NeurIPS 2022.

    Yen-Cheng Liu, Chih-Yao Ma, Junjiao Tian, Zijian He, Zsolt Kira.

    [Paper][Code]

  • Parameter-Efficient and Student-Friendly Knowledge Distillation, NeurIPS 2022.

    Jun Rao, Xv Meng, Liang Ding, Shuhan Qi, Dacheng Tao.

    [Paper][Code]

  • VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks, CVPR 2022.

    Yi-Lin Sung, Jaemin Cho, Mohit Bansal.

    [Paper][Code]

  • 1% VS 100%: Parameter-Efficient Low Rank Adapter for Dense Predictions, CVPR 2023.

    Dongshuo Yin, Yiran Yang, Zhechao Wang, Hongfeng Yu, Kaiwen Wei, Xian Sun.

    [Paper][Code]

  • Vision Transformer Adapter for Dense Predictions, ICLR 2023.

    Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, Yu Qiao.

    [Paper][Code]

  • UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling, arXiv 2023.

    Haoyu Lu, Mingyu Ding, Yuqi Huo, Guoxing Yang, Zhiwu Lu, Masayoshi Tomizuka, Wei Zhan.

    [Paper][Code]
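
Most adapter papers above insert variations of the same module into a frozen transformer: a down-projection, a nonlinearity, and an up-projection, merged back through a residual connection (sequentially, or in parallel with the MLP block as in AdaptFormer). The PyTorch sketch below shows that generic bottleneck design; the width, scaling factor, and zero initialization are common choices rather than any single paper's exact configuration.

    import torch
    import torch.nn as nn

    class BottleneckAdapter(nn.Module):
        """Generic bottleneck adapter: x + scale * up(act(down(x)))."""
        def __init__(self, dim, bottleneck=64, scale=1.0):
            super().__init__()
            self.down = nn.Linear(dim, bottleneck)
            self.act = nn.GELU()
            self.up = nn.Linear(bottleneck, dim)
            self.scale = scale
            # Zero-init the up-projection so the adapter starts as the identity
            # and the pre-trained behavior is preserved at step 0.
            nn.init.zeros_(self.up.weight)
            nn.init.zeros_(self.up.bias)

        def forward(self, x):
            return x + self.scale * self.up(self.act(self.down(x)))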

Parameter

  • LoRA: Low-Rank Adaptation of Large Language Models, ICLR 2022. (see the sketch after this list)

    Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen.

    [Paper][Code]

  • Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning, NeurIPS 2022.

    Dongze Lian, Daquan Zhou, Jiashi Feng, Xinchao Wang.

    [Paper][Code]

  • BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models, ACL 2022. (see the sketch after this list)

    Elad Ben Zaken, Shauli Ravfogel, Yoav Goldberg.

    [Paper][Code]

  • Parameter-efficient Model Adaptation for Vision Transformers, AAAI 2023.

    Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, Xin Eric Wang.

    [Paper][Code]
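
Two of the entries above reduce to a few lines each. LoRA replaces a frozen weight W with W + (alpha/r) * BA, where A and B are small trainable matrices; the wrapper below is a minimal sketch of that reparameterization (names are illustrative, and real implementations such as the official loralib also handle dropout and weight merging).

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen nn.Linear plus a trainable low-rank update."""
        def __init__(self, base: nn.Linear, r=8, alpha=16):
            super().__init__()
            self.base = base
            for p in self.base.parameters():  # W and bias stay frozen
                p.requires_grad = False
            self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at step 0
            self.scaling = alpha / r

        def forward(self, x):
            # base(x) uses the frozen W; the low-rank path adds the learned update
            return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

BitFit is simpler still: freeze everything except the bias terms (plus whatever task head you attach). A sketch, assuming standard PyTorch parameter naming:

    def apply_bitfit(model: nn.Module) -> None:
        # Train only parameters whose name ends in "bias"; freeze the rest.
        for name, param in model.named_parameters():
            param.requires_grad = name.endswith("bias")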

Unified

  • Towards a Unified View of Parameter-Efficient Transfer Learning, ICLR 2022.

    Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig.

    [Paper][Code]

  • Towards a Unified View on Visual Parameter-Efficient Transfer Learning, arXiv 2023.

    Bruce XB Yu, Jianlong Chang, Lingbo Liu, Qi Tian, Chang Wen Chen.

    [Paper][Code]

Others

  • LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning, NeurIPS 2022. (see the sketch after this list)

    Yi-Lin Sung, Jaemin Cho, Mohit Bansal.

    [Paper][Code]

  • Important Channel Tuning, OpenReview 2023.

    Hengyuan Zhao, Pichao Wang, Yuyang Zhao, Fan Wang, Mike Zheng Shou.

    [Paper][Code]

  • Revisit Parameter-Efficient Transfer Learning: A Two-Stage Paradigm, arXiv 2023.

    Hengyuan Zhao, Hao Luo, Yuyang Zhao, Pichao Wang, Fan Wang, Mike Zheng Shou.

    [Paper][Code]
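
LST, unlike most methods in this list, also saves training memory: a small trainable side network consumes downsampled intermediate activations of the frozen backbone, so backpropagation never touches the large model. The sketch below is a rough, illustrative reading of that design; layer sizes, gating, and initialization are assumptions, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class LadderSideNet(nn.Module):
        """Frozen backbone blocks feed a lightweight, trainable side stream."""
        def __init__(self, blocks, dim, side_dim=96):
            super().__init__()
            self.blocks = blocks
            for p in self.blocks.parameters():  # backbone stays frozen
                p.requires_grad = False
            self.downs = nn.ModuleList(nn.Linear(dim, side_dim) for _ in blocks)
            self.sides = nn.ModuleList(nn.Linear(side_dim, side_dim) for _ in blocks)
            self.gates = nn.Parameter(torch.zeros(len(blocks)))  # learned fusion gates

        def forward(self, x):
            s = 0.0  # side-stream state
            for blk, down, side, g in zip(self.blocks, self.downs, self.sides, self.gates):
                with torch.no_grad():  # no gradients flow through the backbone
                    x = blk(x)
                a = torch.sigmoid(g)
                s = side(a * down(x) + (1 - a) * s)  # gated fusion, then side block
            return s  # feed into a small task head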