BendVLM: Test-Time Debiasing of Vision-Language Embeddings
Vision-language model (VLM) embeddings have been shown to encode biases present in their training data, such as societal biases that ascribe negative characteristics to members of various racial and gender identities. VLMs are quickly being adopted for a variety of tasks ranging from few-shot classification to text-guided image generation, making …