DOGE: Towards Versatile Visual Document Grounding and Referring

In recent years, Multimodal Large Language Models (MLLMs) have increasingly emphasized grounding and referring capabilities to achieve detailed understanding and flexible user interaction. However, in the realm of visual document understanding, these capabilities lag behind due to the scarcity of fine-grained datasets and comprehensive benchmarks. To fill this gap, we …