Choosing Wisely and Learning Deeply: Selective Cross-Modality Distillation via CLIP for Domain Generalization

Type: Preprint

Publication Date: 2023-01-01

Citations: 0

DOI: https://doi.org/10.48550/arXiv.2311.15145

Locations

  • arXiv (Cornell University)
  • DataCite API

Similar Works

  • A Sentence Speaks a Thousand Images: Domain Generalization through Distilling CLIP with Language Guidance (2023). Zeyi Huang, Andy Zhou, Zijian Lin, Mu Cai, Haohan Wang, Yong Jae Lee.
  • PracticalDG: Perturbation Distillation on Vision-Language Models for Hybrid Domain Generalization (2024). Zining Chen, Weiqiu Wang, Zhicheng Zhao, Fei Su, Aidong Men, Hongying Meng.
  • Distilling from Vision-Language Models for Improved OOD Generalization in Vision Tasks (2023). Sravanti Addepalli, Ashish Ramayee Asokan, Lakshay Sharma, R. Venkatesh Babu.
  • CLIP the Divergence: Language-guided Unsupervised Domain Adaptation (2024). Jinjing Zhu, Yucheng Chen, Lin Wang.
  • Generalizing CLIP to Unseen Domain via Text-Guided Diverse Novel Feature Synthesis (2024). Siyuan Yan, Cheng Luo, Zhen Yu, Zongyuan Ge.
  • PromptSync: Bridging Domain Gaps in Vision-Language Models through Class-Aware Prototype Alignment and Discrimination (2024). Anant Khandelwal.
  • Unknown Prompt, the only Lacuna: Unveiling CLIP's Potential for Open Domain Generalization (2024). Mainak Singha, Ankit Jha, Shirsha Bose, Ashwin Nair, Moloud Abdar, Biplab Banerjee.
  • CLIP-CID: Efficient CLIP Distillation via Cluster-Instance Discrimination (2024). Kaicheng Yang, T. Gu, Xiang An, Haiqiang Jiang, Xiangzi Dai, Ziyong Feng, Weidong Cai, Jiankang Deng.
  • SYNC-CLIP: Synthetic Data Make CLIP Generalize Better in Data-Limited Scenarios (2023). Mushui Liu, Weijie He, Ziqian Lu, Yunlong Yu.
  • Learning to Diversify for Single Domain Generalization (2021). Zijian Wang, Yadan Luo, Ruihong Qiu, Zi Huang, Mahsa Baktashmotlagh.
  • TinyCLIP: CLIP Distillation via Affinity Mimicking and Weight Inheritance (2023). Kan Wu, Houwen Peng, Zhenghong Zhou, Bin Xiao, Mengchen Liu, Lu Yuan, Hong Xuan, Michael Valenzuela, Xi Chen.
  • Leveraging Normalization Layer in Adapters With Progressive Learning and Adaptive Distillation for Cross-Domain Few-Shot Learning (2023). Yongjin Yang, Taehyeon Kim, Se-Young Yun.
  • StyLIP: Multi-Scale Style-Conditioned Prompt Learning for CLIP-based Domain Generalization (2023). Shirsha Bose, Enrico Fini, Ankit Jha, Mainak Singha, Biplab Banerjee, Elisa Ricci.
  • StyLIP: Multi-Scale Style-Conditioned Prompt Learning for CLIP-based Domain Generalization (2024). Shirsha Bose, Ankit Jha, Enrico Fini, Mainak Singha, Elisa Ricci, Biplab Banerjee.
  • Distilling Vision-Language Foundation Models: A Data-Free Approach via Prompt Diversification (2024). Yunyi Xuan, Weijie Chen, Shicai Yang, Di Xie, Luojun Lin, Yueting Zhuang.
  • Distilling Large Vision-Language Model with Out-of-Distribution Generalizability (2023). Xuanlin Li, Yunhao Fang, Minghua Liu, Zhan Ling, Zhuowen Tu, Hao Su.
  • Modality-specific Distillation (2021). Woojeong Jin, Maziar Sanjabi, Shaoliang Nie, Liang Tan, Xiang Ren, Hamed Firooz.
  • Leveraging Normalization Layer in Adapters with Progressive Learning and Adaptive Distillation for Cross-Domain Few-Shot Learning (2024). Yongjin Yang, Taehyeon Kim, Se-Young Yun.
  • Domain Aligned CLIP for Few-shot Classification (2024). Muhammad Waleed Gondal, Jochen Gast, Inigo Alonso Ruiz, Richard Droste, Tommaso Macrì, Suren Kumar, Luitpold Staudigl.

Works That Cite This (0)


Works Cited by This (0)
