• Stars: 629
• Rank: 71,454 (Top 2%)
• Language: Python
• License: MIT License
• Created: about 2 years ago
• Updated: over 1 year ago


MaPLe: Multi-modal Prompt Learning [CVPR 2023]

Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, Salman Khan, Fahad Shahbaz Khan

Links: Website | Paper | Video | Slides

Official implementation of the paper "MaPLe: Multi-modal Prompt Learning".


[PapersWithCode leaderboard badges: base-to-novel generalization and domain generalization]


Highlights

[Main figure]

Abstract: Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to improve alignment between the vision and language representations. Our design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions. Further, we learn separate prompts across different early stages to progressively model the stage-wise feature relationships to allow rich context learning. We evaluate the effectiveness of our approach on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable performance and achieves an absolute gain of 3.45% on novel classes and 2.72% on overall harmonic-mean, averaged over 11 diverse image recognition datasets. Our code and models will be publicly released.
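
For context, the single-branch prompting that the abstract calls sub-optimal learns context vectors only on the text side, in the style of CoOp. The sketch below illustrates that baseline; the class counts, token counts, and dimensions are illustrative assumptions, not values from this repository.

```python
import torch
import torch.nn as nn

class TextPromptLearner(nn.Module):
    """CoOp-style uni-modal prompt learning (illustrative sketch):
    learnable context vectors are prepended to the token embeddings of
    each class name and fed to CLIP's frozen text encoder."""

    def __init__(self, n_ctx: int = 16, dim: int = 512):
        super().__init__()
        # Shared learnable context, trained while CLIP stays frozen.
        self.ctx = nn.Parameter(0.02 * torch.randn(n_ctx, dim))

    def forward(self, class_embeddings: torch.Tensor) -> torch.Tensor:
        # class_embeddings: (n_cls, n_tok, dim) embeddings of class-name tokens.
        n_cls = class_embeddings.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(n_cls, -1, -1)
        # Prepend the shared context to every class: (n_cls, n_ctx + n_tok, dim).
        return torch.cat([ctx, class_embeddings], dim=1)
```

MaPLe instead prompts both branches and couples them, as the contributions below describe.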

Main Contributions

  1. Multi-modal prompt learning: Adapt CLIP using a novel prompting technique that prompts both the vision and language branches of CLIP.
  2. Vision and Language Prompt Coupling: Explicitly condition vision prompts on their language counterparts through a coupling function that bridges the two modalities and allows mutual propagation of gradients, promoting synergy (see the sketch after this list).
  3. Vision and Language Deep Prompting: Learn multi-modal prompts across multiple transformer blocks in both vision and language branches to progressively learn the synergistic behaviour of both modalities.
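
A minimal sketch of how contributions 2 and 3 fit together, assuming a linear coupling function and hypothetical prompt length, depth, and embedding sizes (the repository's actual hyperparameters live in its configs):

```python
import torch
import torch.nn as nn

class CoupledDeepPrompts(nn.Module):
    """Illustrative sketch of MaPLe-style coupled deep prompting: language
    prompts are learned for several early transformer layers, and each
    layer's vision prompts are generated from the language prompts through
    a linear coupling function. All sizes are illustrative assumptions."""

    def __init__(self, n_ctx: int = 2, depth: int = 9,
                 text_dim: int = 512, vision_dim: int = 768):
        super().__init__()
        # One set of learnable language prompts per prompted layer (deep prompting).
        self.text_prompts = nn.ParameterList(
            [nn.Parameter(0.02 * torch.randn(n_ctx, text_dim)) for _ in range(depth)]
        )
        # Per-layer coupling functions: vision prompts are projections of the
        # language prompts, so gradients flow through both branches.
        self.couplers = nn.ModuleList(
            [nn.Linear(text_dim, vision_dim) for _ in range(depth)]
        )

    def forward(self):
        text = list(self.text_prompts)
        vision = [f(p) for f, p in zip(self.couplers, self.text_prompts)]
        return text, vision  # prompts to inject into the two frozen encoders
```

In the full method these prompts are injected as extra tokens into the corresponding early layers of CLIP's frozen text and image encoders; only the prompts and the coupling layers receive gradient updates.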

โ˜‘๏ธ Supported Methods

| Method | Paper | Configs | Training Scripts |
|---|---|---|---|
| MaPLe | CVPR 2023 | link | link |
| CoOp | IJCV 2022 | link | link |
| Co-CoOp | CVPR 2022 | link | link |
| Deep Vision Prompting | - | link | link |
| Deep Language Prompting | - | link | link |
| Independent V-L Prompting | - | link | link |

Results

MaPLe in comparison with existing methods

Results reported below show base- and novel-class accuracy across 11 recognition datasets, averaged over 3 seeds.

| Name | Base Acc. | Novel Acc. | HM | Epochs |
|---|---|---|---|---|
| CLIP | 69.34 | 74.22 | 71.70 | - |
| CoOp | 82.69 | 63.22 | 71.66 | 200 |
| CoCoOp | 80.47 | 71.69 | 75.83 | 10 |
| MaPLe (ours) | 82.28 | 75.14 | 78.55 | 5 |
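
HM is the harmonic mean of base and novel accuracy; it rewards methods that do well on both rather than trading one for the other. A quick check of the MaPLe row (illustrative only, not part of the repository):

```python
def harmonic_mean(base: float, novel: float) -> float:
    """Harmonic mean of base- and novel-class accuracy."""
    return 2 * base * novel / (base + novel)

# MaPLe row above: base 82.28, novel 75.14.
print(round(harmonic_mean(82.28, 75.14), 2))  # 78.55
```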

Installation

For installation and other package requirements, please follow the instructions detailed in INSTALL.md.

Data preparation

Please follow the instructions at DATASETS.md to prepare all datasets.

Model Zoo

Vision-Language prompting methods

| Name (configs) | Base Acc. | Novel Acc. | HM | Epochs | Model / Logs |
|---|---|---|---|---|---|
| Deep Vision Prompting | 80.24 | 73.43 | 76.68 | 5 | link |
| Deep Language Prompting | 81.72 | 73.81 | 77.56 | 5 | link |
| Independent V-L Prompting | 82.15 | 74.07 | 77.90 | 5 | link |
| MaPLe | 82.28 | 75.14 | 78.55 | 5 | link |

Training and Evaluation

Please refer to RUN.md for detailed instructions on training, evaluating, and reproducing results using our pre-trained models.


Citation

If you use our work, please consider citing:

@inproceedings{khattakMaPLe,
    title={MaPLe: Multi-modal Prompt Learning},
    author={Khattak, Muhammad Uzair and Rasheed, Hanoona and Maaz, Muhammad and Khan, Salman and Khan, Fahad Shahbaz},
    booktitle={The IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2023}
}

Contact

If you have any questions, please create an issue on this repository or contact us at [email protected] or [email protected].

Acknowledgements

Our code builds on the Co-CoOp and CoOp repositories. We thank the authors for releasing their code. If you use our model and code, please consider citing these works as well.

More Repositories

1. ViFi-CLIP: [CVPR 2023] Official repository of paper titled "Fine-tuned CLIP models are efficient video learners". (Python, 240 stars)
2. PromptSRC: [ICCV'23 Main Track, WECIA'23 Oral] Official repository of paper titled "Self-regulating Prompts: Foundational Model Adaptation without Forgetting". (Python, 220 stars)
3. ProText: [CVPRW 2024] Official repository of paper titled "Learning to Prompt with Text Only Supervision for Vision-Language Models". (Python, 80 stars)
4. transformers-transforming-vision: Validating image classification benchmark results on ViTs and ResNets (v2). (Python, 13 stars)
5. OD-Satellite-iSAID: Object detection with satellite images. (Python, 10 stars)
6. ImageRecognition-NVIDIA-Jetson: PyTorch ResNet50 scripts; performance metrics evaluated on a Jetson Nano and a GTX 1060 laptop (Asus GL702VM). (Python, 4 stars)
7. facial-mask-detector-MTCNN: Facial mask detector built with a custom PyTorch NN and the TensorFlow-based MTCNN face detection algorithm. (Jupyter Notebook, 3 stars)
8. ConvNext-FGVC: Benchmarking ConvNeXts on FGVC datasets. (Python, 3 stars)
9. muzairkhattak.github.iouzair: My personal website. (HTML, 3 stars)
10. CLIP_contextualAnalysis: Experiments on CLIP's classification behaviour under varying prompts and contextual information. (Jupyter Notebook, 3 stars)
11. Machine-Learning-Course-Assigments-CS-470-: Solved coding assignments for my CS470 ML course (during my bachelor's). (Jupyter Notebook, 2 stars)
12. muzairkhattak.github.io: (HTML, 2 stars)
13. Image-Stitching-Results: WSI images stitched with our custom image-stitching software. (2 stars)
14. proposals_visualizer_fasterrcnn: Jupyter notebook for visualizing RPN proposals of a trained Faster R-CNN on given sample images. (Jupyter Notebook, 2 stars)
15. PyTorch: (Jupyter Notebook, 1 star)
16. Plant_pathology-Kaggle-Competition: FastAI DenseNet and PyTorch ResNet implementations reaching up to 96% test accuracy on the Kaggle plant-pathology competition data. (Jupyter Notebook, 1 star)
17. muzairkhattak: (1 star)
18. Coursera-deeplearning.ai-assigments: Solved assignments from Andrew Ng's Deep Learning Specialization on Coursera/deeplearning.ai. (Jupyter Notebook, 1 star)
19. Digital-Signal-Processing_MATLAB: Digital signal processing algorithms implemented in MATLAB. (MATLAB, 1 star)
20. chatbot-PyTorch-: Jupyter notebook implementing a chatbot with PyTorch. (Jupyter Notebook, 1 star)
21. ML-Algorithms-using-Libraries-: ML algorithms implemented with popular ML/DL libraries. (Jupyter Notebook, 1 star)
22. Linear-Control-System-using-MATLAB: MATLAB code for various linear control system functions and applications. (MATLAB, 1 star)
23. PAK-COVID-19-Citywise-and-District-wise-analysis: Python script for visualization and preprocessing of the PAK-COVID-19 dataset from Kaggle. (Jupyter Notebook, 1 star)
24. TensorRT-with-PyTorch: Code and documentation for using PyTorch models with TensorRT optimization on PCs, Jetson TX2, and Jetson Nano. (Python, 1 star)