Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge

In recent years, multimodal large language models (MLLMs) have made significant strides by training on vast, high-quality image-text datasets, enabling them to develop a broad understanding of images. However, the inherent difficulty of conveying fine-grained or spatially dense information, such as masks, through text poses a challenge for MLLMs, limiting their …