Multi-Class Textual-Inversion Secretly Yields a Semantic-Agnostic Classifier
With the advent of large pre-trained vision-language models such as CLIP, prompt learning methods aim to enhance CLIP's transferability. They learn a prompt from a few samples of the downstream task, taking the specific class names as prior knowledge, a setting we term semantic-aware classification. However, in …