Distilling Vision-Language Foundation Models: A Data-Free Approach via Prompt Diversification

Data-Free Knowledge Distillation (DFKD) has shown great potential in creating a compact student model while alleviating the dependency on real training data by synthesizing surrogate data. However, prior works are seldom evaluated under distribution shifts, which can leave them vulnerable in real-world applications. Recent Vision-Language Foundation Models, e.g., CLIP, have demonstrated …
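The abstract notes that DFKD replaces real training data with synthesized surrogate data when compressing a teacher into a compact student. As a rough, illustrative sketch of that generic paradigm only (not of this paper's Prompt Diversification approach), the PyTorch snippet below shows a common adversarial DFKD loop: a generator searches for synthetic images on which student and teacher disagree, and the student is then trained to match the teacher's soft predictions on such images. All class names, layer sizes, and hyperparameters (e.g., the temperature T) are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps latent noise to surrogate images (illustrative sizes only)."""
    def __init__(self, z_dim=100, img_ch=3, img_size=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128 * (img_size // 4) ** 2),
            nn.Unflatten(1, (128, img_size // 4, img_size // 4)),
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(128, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(64, img_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def dfkd_step(generator, teacher, student, g_opt, s_opt,
              batch=64, z_dim=100, T=4.0):
    """One adversarial DFKD round: the generator seeks images where the
    student and teacher disagree; the student then imitates the teacher."""
    teacher.eval()

    # Generator update: maximize student/teacher disagreement (negative KL).
    z = torch.randn(batch, z_dim)
    x = generator(z)
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    g_loss = -F.kl_div(F.log_softmax(s_logits / T, dim=1),
                       F.softmax(t_logits / T, dim=1), reduction="batchmean")
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # Student update: match the teacher's soft labels on fresh synthetic data.
    z = torch.randn(batch, z_dim)
    x = generator(z).detach()
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    s_loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                      F.softmax(t_logits / T, dim=1),
                      reduction="batchmean") * (T * T)
    s_opt.zero_grad(); s_loss.backward(); s_opt.step()
    return g_loss.item(), s_loss.item()
```

In this generic setup the student never sees real data; all supervision comes from the frozen teacher's responses to generated images, which is what makes the quality and diversity of the surrogate data the central concern that the paper addresses for vision-language teachers such as CLIP.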