MaPLe: Multi-modal Prompt Learning

Type: Article

Publication Date: 2023-06-01

Citations: 240

DOI: https://doi.org/10.1109/cvpr52729.2023.01832

Abstract

Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to improve alignment between the vision and language representations. Our design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions. Further, we learn separate prompts across different early stages to progressively model the stage-wise feature relationships to allow rich context learning. We evaluate the effectiveness of our approach on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable performance and achieves an absolute gain of 3.45% on novel classes and 2.72% on overall harmonic-mean, averaged over 11 diverse image recognition datasets. Our code and pre-trained models are available at https://github.com/muzairkhattak/multimodal-prompt-learning.
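
The abstract compresses the method into a few sentences; the key mechanism is that the vision-branch prompts are not free parameters but are generated from the language-branch prompts. Below is a minimal, illustrative PyTorch sketch of that coupling, not the authors' released implementation: the class and attribute names (CoupledPrompts, couplers) are invented for this example, while the default sizes (2 prompt tokens, prompt depth 9, and 512/768-dimensional text/vision embeddings for a ViT-B/16 CLIP) follow the configuration reported in the paper. Wiring the returned prompts into an actual frozen CLIP forward pass is omitted.

    import torch
    import torch.nn as nn

    class CoupledPrompts(nn.Module):
        """Learnable text prompts plus per-layer linear couplers that
        project them into the vision branch (illustrative sketch only)."""

        def __init__(self, n_ctx=2, depth=9, d_text=512, d_vision=768):
            super().__init__()
            # One set of learnable text-prompt tokens per prompted layer.
            self.text_prompts = nn.ParameterList(
                nn.Parameter(0.02 * torch.randn(n_ctx, d_text))
                for _ in range(depth)
            )
            # Coupling functions: project each layer's text prompts into
            # the vision embedding space, so the vision prompts stay
            # conditioned on (coupled to) the language prompts rather
            # than being learned as independent uni-modal parameters.
            self.couplers = nn.ModuleList(
                nn.Linear(d_text, d_vision) for _ in range(depth)
            )

        def forward(self, layer_idx: int):
            t = self.text_prompts[layer_idx]   # (n_ctx, d_text)
            v = self.couplers[layer_idx](t)    # (n_ctx, d_vision)
            return t, v

    # Usage: at each of the first `depth` encoder layers of a frozen
    # CLIP-like model, prepend `t` to the text tokens and `v` to the
    # image patch tokens; only this module's parameters are trained.
    prompts = CoupledPrompts()
    t0, v0 = prompts(0)
    print(t0.shape, v0.shape)  # torch.Size([2, 512]) torch.Size([2, 768])

For reference, the "harmonic-mean" metric quoted above is the standard base-to-novel generalization score H = 2 * base * novel / (base + novel), computed from base-class and novel-class accuracies as in the Co-CoOp evaluation protocol.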

Locations

  • arXiv (Cornell University)
  • 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Similar Works

  • MaPLe: Multi-modal Prompt Learning (2022). Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, Salman Khan, Fahad Shahbaz Khan
  • Multi-Prompt with Depth Partitioned Cross-Modal Learning (2023). Yiqi Wang, Xianda Guo, Zheng Zhu, Yingjie Tian
  • APoLLo: Unified Adapter and Prompt Learning for Vision Language Models (2023). Sanjoy Chowdhury, Sayan Nag, Dinesh Manocha
  • COMMA: Co-Articulated Multi-Modal Learning (2024). Lianyu Hu, Liqing Gao, Zekang Liu, Chi-Man Pun, Wei Feng
  • Task-Oriented Multi-Modal Mutual Leaning for Vision-Language Models (2023). Sifan Long, Zhen Zhao, Junkun Yuan, Zichang Tan, Jiangjiang Liu, Luping Zhou, Shengsheng Wang, Jingdong Wang
  • Learning to Prompt with Text Only Supervision for Vision-Language Models (2024). Muhammad Uzair Khattak, Muhammad Ferjad Naeem, Muzammal Naseer, Luc Van Gool, Federico Tombari
  • MuDPT: Multi-modal Deep-symphysis Prompt Tuning for Large Pre-trained Vision-Language Models (2023). Yongzhu Miao, Shasha Li, Jintao Tang, Ting Wang
  • APLe: Token-Wise Adaptive for Multi-Modal Prompt Learning (2024). Guiming Cao, Kaize Shi, Hong Fu, Huaiwen Zhang, Guandong Xu
  • DPL: Decoupled Prompt Learning for Vision-Language Models (2023). Xu Chen, Yuhan Zhu, Guozhen Zhang, Haocheng Shen, Yixuan Liao, Xiaoxin Chen, Gangshan Wu, Limin Wang
  • Learning Domain Invariant Prompt for Vision-Language Models (2022). Cairong Zhao, Yubin Wang, Xinyang Jiang, Yifei Shen, Kaitao Song, Dongsheng Li, Duoqian Miao
  • Generalizable Prompt Tuning for Vision-Language Models (2024). Qian Zhang
  • LAMM: Label Alignment for Multi-Modal Prompt Learning (2023). Jingsheng Gao, Jiacheng Ruan, Suncheng Xiang, Zefang Yu, Ke Ji, Mingye Xie, Ting Liu, Yuzhuo Fu
  • LAMM: Label Alignment for Multi-Modal Prompt Learning (2024). Jingsheng Gao, Jiacheng Ruan, Suncheng Xiang, Zefang Yu, Ke Ji, Mingye Xie, Ting Liu, Yuzhuo Fu
  • Progressive Multi-modal Conditional Prompt Tuning (2024). Xiaoyu Qiu, Hao Feng, Yuechen Wang, Wengang Zhou, Houqiang Li
  • Learning Domain Invariant Prompt for Vision-Language Models (2024). Cairong Zhao, Yubin Wang, Xinyang Jiang, Yifei Shen, Kaitao Song, Dongsheng Li, Duoqian Miao
  • CoPL: Contextual Prompt Learning for Vision-Language Understanding (2024). Koustava Goswami, Srikrishna Karanam, Prateksha Udhayanan, K J Joseph, B. Srinivasan
  • CoPL: Contextual Prompt Learning for Vision-Language Understanding (2023). Koustava Goswami, Srikrishna Karanam, K J Joseph, Prateksha Udhayanan, B. Srinivasan

Works That Cite This (56)

  • Multi-modal Attribute Prompting for Vision-Language Models (2024). Xin Liu, Jiamin Wu, Wenfei Yang, Xu Zhou, Tianzhu Zhang
  • Optimizing Mobile-Edge AI-Generated Everything (AIGX) Services by Prompt Engineering: Fundamental, Framework, and Case Study (2023). Yinqiu Liu, Hongyang Du, Dusit Niyato, Jiawen Kang, Shuguang Cui, Xuemin Shen, Ping Zhang
  • CoPL: Contextual Prompt Learning for Vision-Language Understanding (2024). Koustava Goswami, Srikrishna Karanam, Prateksha Udhayanan, K J Joseph, B. Srinivasan
  • VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control (2023). Zi-Yuan Hu, Yanyang Li, Michael R. Lyu, Liwei Wang
  • MMGPL: Multimodal Medical Data Analysis with Graph Prompt Learning (2024). Peng Liang, Songyue Cai, Zongqian Wu, Hui-Fang Shang, Xiaofeng Zhu, Xiaoxiao Li
  • Learning to Prompt Knowledge Transfer for Open-World Continual Learning (2024). Yujie Li, Xin Yang, Hao Wang, Xiangkun Wang, Tianrui Li
  • Do we really need a large number of visual prompts? (2024). Youngeun Kim, Yuhang Li, Abhishek Moitra, Ruokai Yin, Priyadarshini Panda
  • Progressive Multi-modal Conditional Prompt Tuning (2024). Xiaoyu Qiu, Hao Feng, Yuechen Wang, Wengang Zhou, Houqiang Li
  • What Can Human Sketches Do for Object Detection? (2023). Pinaki Nath Chowdhury, Ayan Kumar Bhunia, Aneeshan Sain, Subhadeep Koley, Tao Xiang, Yi-Zhe Song
  • Efficient Multimodal Fusion via Interactive Prompting (2023). Yaowei Li, Ruijie Quan, Linchao Zhu, Yi Yang

Works Cited by This (39)

  • Fine-Grained Visual Classification of Aircraft (2013). Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew B. Blaschko, Andrea Vedaldi
  • Describing Textures in the Wild (2014). Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, Andrea Vedaldi
  • UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild (2012). Khurram Soomro, Amir Zamir, Mubarak Shah
  • Learning Robust Global Representations by Penalizing Local Predictive Power (2019). Haohan Wang, Songwei Ge, Eric P. Xing, Zachary C. Lipton
  • Do ImageNet Classifiers Generalize to ImageNet? (2019). Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, Vaishaal Shankar
  • EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification (2019). Patrick Helber, Benjamin Bischke, Andreas Dengel, Damian Borth
  • The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization (2021). Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Fengqiu Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al.
  • An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (2020). Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al.
  • Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision (2021). Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, Tom Duerig
  • Learning Transferable Visual Models From Natural Language Supervision (2021). Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.