
SenCLIP: Enhancing zero-shot land-use mapping for Sentinel-2 with ground-level prompting

Pre-trained vision-language models (VLMs), such as CLIP, demonstrate impressive zero-shot classification capabilities with free-form prompts and even show some generalization in specialized domains. However, their performance on satellite imagery is limited due to the underrepresentation of such data in their training sets, which predominantly consist of ground-level images. Existing prompting …
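As a rough illustration of the zero-shot mechanism the abstract refers to: CLIP-style models classify an image by embedding a set of free-form text prompts and the image into a shared space, then picking the prompt with the highest cosine similarity. The sketch below uses toy hand-made embeddings (not real CLIP encoder outputs) purely to show the similarity-and-argmax step; the prompt strings and vector values are illustrative assumptions.

```python
import numpy as np

# Free-form class prompts, as in CLIP-style zero-shot land-use classification.
PROMPTS = [
    "a satellite photo of farmland",
    "a satellite photo of a forest",
    "a satellite photo of an urban area",
]

def l2_normalize(x):
    # Unit-normalize so a dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Toy stand-ins for the text encoder's outputs (one row per prompt,
# 4 dimensions for illustration; real CLIP embeddings are much larger).
text_emb = l2_normalize(np.array([
    [0.9, 0.1, 0.0, 0.1],   # farmland
    [0.1, 0.9, 0.1, 0.0],   # forest
    [0.0, 0.1, 0.9, 0.1],   # urban
]))

# Toy stand-in for the image encoder's output on a forest-like tile.
img_emb = l2_normalize(np.array([0.2, 0.8, 0.1, 0.0]))

# Zero-shot prediction: cosine similarity per prompt, softmax, argmax.
logits = text_emb @ img_emb
probs = np.exp(logits) / np.exp(logits).sum()
pred = PROMPTS[int(np.argmax(probs))]
print(pred)  # the forest prompt scores highest
```

In a real pipeline the two embedding arrays would come from a pre-trained model's text and image towers; the classification step itself is exactly this similarity-plus-argmax, which is why prompt wording (the focus of this paper) directly affects accuracy.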