CLIP-Adapter: Better Vision-Language Models with Feature Adapters

Official implementation of 'CLIP-Adapter: Better Vision-Language Models with Feature Adapters'.

Introduction

CLIP-Adapter is a drop-in module designed for CLIP on few-shot classification tasks. It improves CLIP's few-shot classification performance with a very simple design.
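For intuition, below is a minimal sketch of the adapter idea: a small bottleneck MLP applied on top of frozen CLIP features, blended back into the original features through a residual ratio. The class and function names, the reduction factor, and the blend ratio are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck MLP applied on top of frozen CLIP features (illustrative)."""

    def __init__(self, dim: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(dim, dim // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim, bias=False),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(x)


def adapt_features(features: torch.Tensor, adapter: Adapter, ratio: float = 0.2) -> torch.Tensor:
    """Residual blend of adapted and original features; ratio is a hyperparameter."""
    adapted = adapter(features)
    blended = ratio * adapted + (1 - ratio) * features
    # Re-normalize so the blended features can be compared by cosine similarity.
    return blended / blended.norm(dim=-1, keepdim=True)


if __name__ == "__main__":
    # Stand-in for CLIP image embeddings (e.g. 1024-d for RN50), already L2-normalized.
    image_features = torch.randn(8, 1024)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    adapter = Adapter(dim=1024)
    adapted_features = adapt_features(image_features, adapter)
    print(adapted_features.shape)  # torch.Size([8, 1024])
```

Only the adapter's parameters would be trained in the few-shot setting; the CLIP backbone stays frozen.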

Requirements

We build on the codebase of CoOp. Please follow its instructions to prepare the environment and datasets.

Get Started

Put clip_adapter.py under ./trainers and add the related import statements. Then follow CoOp's commands to run on ImageNet.
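As an example, assuming CoOp's usual layout where trainers register themselves on import, the source change would just be one extra import next to the existing trainer imports (the module path below reflects placing clip_adapter.py under ./trainers):

```python
# In CoOp's train.py, alongside the existing trainer imports:
import trainers.clip_adapter  # noqa: F401  -- registers the CLIP-Adapter trainer
```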

The complete code will be released soon.

New version of CLIP-Adapter

Please check Tip-Adapter: Training-free CLIP-Adapter.

Contributors

Renrui Zhang, Peng Gao

Acknowledgement

This repo benefits from CLIP and CoOp. Thanks for their wonderful work.

Citation

@article{gao2021clip,
  title={CLIP-Adapter: Better Vision-Language Models with Feature Adapters},
  author={Gao, Peng and Geng, Shijie and Zhang, Renrui and Ma, Teli and Fang, Rongyao and Zhang, Yongfeng and Li, Hongsheng and Qiao, Yu},
  journal={arXiv preprint arXiv:2110.04544},
  year={2021}
}

Contact

If you have any questions about this project, please feel free to contact [email protected] and [email protected].