
[ICCV'23 Main Track, WECIA'23 Oral] Official repository of paper titled "Self-regulating Prompts: Foundational Model Adaptation without Forgetting".

Self-regulating Prompts: Foundational Model Adaptation without Forgetting [ICCV 2023]

Muhammad Uzair Khattak*, Syed Talal Wasim*, Muzammal Naseer, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan

*Joint first authors






🚀 News

  • (July 14, 2023) Our work is accepted to ICCV 2023! 🎉
  • (July 12, 2023)

Highlights

Main figure:

Left: Existing prompt learning approaches for foundational vision-language models such as CLIP rely on task-specific objectives that restrict prompts to a feature space suited only to downstream tasks, and consequently lose the generalized knowledge of CLIP (shown in purple). Our self-regulating framework explicitly guides the training trajectory of prompts towards the closest point between the two optimal solution manifolds (solid line), learning task-specific representations while retaining generalized CLIP knowledge (shown in green). Middle: Averaged across 11 image recognition datasets, PromptSRC surpasses existing methods in the base-to-novel generalization setting. Right: We evaluate our approach on four diverse image recognition benchmarks for CLIP and show consistent gains over previous state-of-the-art approaches.

Abstract: Prompt learning has emerged as an efficient alternative for fine-tuning foundational models, such as CLIP, for various downstream tasks. Conventionally trained using the task-specific objective, i.e., cross-entropy loss, prompts tend to overfit downstream data distributions and find it challenging to capture task-agnostic general features from the frozen CLIP. This leads to the loss of the model's original generalization capability. To address this issue, our work introduces a self-regularization framework for prompting called PromptSRC (Prompting with Self-regulating Constraints). PromptSRC guides the prompts to optimize for both task-specific and task-agnostic general representations using a three-pronged approach by: (a) regulating {prompted} representations via mutual agreement maximization with the frozen model, (b) regulating with self-ensemble of prompts over the training trajectory to encode their complementary strengths, and (c) regulating with textual diversity to mitigate sample diversity imbalance with the visual branch. To the best of our knowledge, this is the first regularization framework for prompt learning that avoids overfitting by jointly attending to pre-trained model features, the training trajectory during prompting, and the textual diversity. PromptSRC explicitly steers the prompts to learn a representation space that maximizes performance on downstream tasks without compromising CLIP generalization. We perform experiments on 4 benchmarks where PromptSRC performs favorably well compared to the existing methods. Our code and pre-trained models are publicly available.
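To make constraint (a) concrete, here is a minimal NumPy sketch of a mutual-agreement regularizer: an L1 term pulling prompted features towards the frozen model's features plus a KL term matching the two class distributions. The function name `mutual_agreement_loss` and the weight `lam` are illustrative assumptions, not the repository's actual API; see the training code for the exact losses and weighting.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mutual_agreement_loss(prompted_feats, frozen_feats,
                          prompted_logits, frozen_logits, lam=1.0):
    """L1 agreement on features plus KL agreement on class distributions."""
    # Pull prompted features towards the frozen (pre-trained) CLIP features.
    l1 = np.abs(prompted_feats - frozen_feats).mean()
    # Match the prompted class distribution to the frozen model's distribution.
    p, q = softmax(frozen_logits), softmax(prompted_logits)
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1).mean()
    return l1 + lam * kl

# Identical prompted and frozen outputs incur zero regularization cost.
feats = np.random.default_rng(0).normal(size=(4, 512))
logits = np.random.default_rng(1).normal(size=(4, 10))
print(mutual_agreement_loss(feats, feats, logits, logits))  # → 0.0
```

The loss is minimized exactly when the prompted branch agrees with the frozen model, which is how the regularizer discourages drifting away from CLIP's pre-trained representation space.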

Regularization Framework for Prompt Learning

We propose PromptSRC (Prompting with Self-regulating Constraints) which steers the prompts to learn a representation space that maximizes performance on downstream tasks without compromising CLIP generalization.

Key components of PromptSRC:

  1. Mutual agreement maximization: PromptSRC explicitly guides the prompts to jointly acquire both task-specific knowledge and task-agnostic generalized knowledge by maximizing the mutual agreement between prompted features and the corresponding features of the frozen VL model.
  2. Gaussian weighted prompt aggregation: We propose a weighted self-ensembling strategy for prompts over the training trajectory that captures complementary features and enhances their generalization abilities.
  3. Textual diversity: PromptSRC regulates prompts with textual diversity to mitigate sample diversity imbalance compared to the visual branch during training.
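Component 2 can be sketched as follows: prompt vectors saved after each epoch are combined with a Gaussian weighting over the training trajectory. The helper names and the particular `mu`/`sigma` values are illustrative assumptions for this sketch; the repository's schedule and hyperparameters may differ.

```python
import numpy as np

def gaussian_weights(num_epochs, mu, sigma):
    """Gaussian weights over epochs 1..num_epochs, normalized to sum to 1."""
    t = np.arange(1, num_epochs + 1)
    w = np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))
    return w / w.sum()

def aggregate_prompts(prompt_history, weights):
    """Weighted average of the prompt vectors saved after each epoch."""
    return sum(w * p for w, p in zip(weights, prompt_history))

# Centering the Gaussian on later epochs emphasizes better-converged prompts
# while still mixing in earlier, more generalizable ones.
weights = gaussian_weights(num_epochs=20, mu=15, sigma=3.0)
prompts = [np.full(8, float(e)) for e in range(1, 21)]  # dummy per-epoch prompts
ensembled = aggregate_prompts(prompts, weights)
```

Because the weights sum to 1, the ensembled prompt stays in the convex hull of the per-epoch prompts, capturing their complementary strengths rather than committing to any single checkpoint.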

☑️ Supported Methods

| Method                    | Paper     | Configs | Training Scripts |
|---------------------------|-----------|---------|------------------|
| PromptSRC                 | arXiv     | link    | link             |
| Independent V-L Prompting | -         | link    | link             |
| MaPLe                     | CVPR 2023 | link    | link             |
| CoOp                      | IJCV 2022 | link    | link             |
| Co-CoOp                   | CVPR 2022 | link    | link             |

Results

Results reported below show accuracy on base and novel classes across 11 recognition datasets, averaged over 3 seeds.

Effectiveness of PromptSRC in comparison with baseline Independent V-L Prompting

PromptSRC effectively maximizes supervised task performance (base classes) without compromising on CLIP's original generalization towards new unseen tasks (novel classes).

| Name                      | Base Acc. | Novel Acc. | HM    | Epochs |
|---------------------------|-----------|------------|-------|--------|
| CLIP                      | 69.34     | 74.22      | 71.70 | -      |
| Independent V-L Prompting | 84.21     | 71.79      | 77.51 | 20     |
| PromptSRC (ours)          | 84.26     | 76.10      | 79.97 | 20     |
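The HM column is the harmonic mean of base and novel accuracy, the standard summary metric in the base-to-novel generalization setting. It can be reproduced directly from the two accuracy columns:

```python
def harmonic_mean(base_acc, novel_acc):
    """Harmonic mean of base-class and novel-class accuracy."""
    return 2 * base_acc * novel_acc / (base_acc + novel_acc)

# Reproduces the HM column, e.g. for PromptSRC: base 84.26, novel 76.10.
print(round(harmonic_mean(84.26, 76.10), 2))  # → 79.97
```

The harmonic mean penalizes imbalance, so a method must do well on both base and novel classes to score highly.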

PromptSRC in comparison with existing state-of-the-art

| Name             | Base Acc. | Novel Acc. | HM    | Epochs |
|------------------|-----------|------------|-------|--------|
| CLIP             | 69.34     | 74.22      | 71.70 | -      |
| CoOp             | 82.69     | 63.22      | 71.66 | 200    |
| CoCoOp           | 80.47     | 71.69      | 75.83 | 10     |
| ProDA            | 81.56     | 75.83      | 76.65 | 100    |
| MaPLe            | 82.28     | 75.14      | 78.55 | 5      |
| PromptSRC (ours) | 84.26     | 76.10      | 79.97 | 20     |

Installation

For installation and other package requirements, please follow the instructions detailed in INSTALL.md.

Data Preparation

Please follow the instructions at DATASETS.md to prepare all datasets.

Model Zoo

Vision-Language prompting methods

| Name (configs)            | Model checkpoints |
|---------------------------|-------------------|
| Independent V-L Prompting | link              |
| PromptSRC                 | link              |

Evaluation

Please refer to EVAL.md for detailed instructions on using the evaluation scripts and reproducing the official results with our pre-trained models.

Training

Please refer to TRAIN.md for detailed instructions on training PromptSRC and the IVLP baseline from scratch.


Citation

If you find our work, this repository, or the pre-trained models useful, please consider giving a star ⭐ and citing our paper.

@InProceedings{Khattak_2023_ICCV,
    author    = {Khattak, Muhammad Uzair and Wasim, Syed Talal and Naseer, Muzammal and Khan, Salman and Yang, Ming-Hsuan and Khan, Fahad Shahbaz},
    title     = {Self-regulating Prompts: Foundational Model Adaptation without Forgetting},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {15190-15200}
}

Contact

If you have any questions, please create an issue on this repository or contact at [email protected] or [email protected].

Acknowledgements

Our code is based on the MaPLe, Co-CoOp, and CoOp repositories. We thank the authors for releasing their code. If you use our model or code, please consider citing these works as well.
