Select and Distill: Selective Dual-Teacher Knowledge Transfer for
Continual Learning on Vision-Language Models
Large-scale vision-language models (VLMs) have shown strong zero-shot generalization on unseen-domain data. However, when pre-trained VLMs are adapted to a sequence of downstream tasks, they are prone to forgetting previously learned knowledge, which degrades their zero-shot classification capability. To tackle this problem, we propose a unique Selective Dual-Teacher Knowledge …