Multi-Class Textual-Inversion Secretly Yields a Semantic-Agnostic
Classifier
With the advent of large pre-trained vision-language models such as CLIP, prompt-learning methods aim to enhance the transferability of the CLIP model. They learn a prompt from a few samples of the downstream task, using the specific class names as prior knowledge, a setting we term semantic-aware classification. However, in …