BLIP3-KALE: Knowledge Augmented Large-Scale Dense Captions

We introduce BLIP3-KALE, a dataset of 218 million image-text pairs that bridges the gap between descriptive synthetic captions and factual web-scale alt-text. KALE augments synthetic dense image captions with web-scale alt-text to generate factually grounded image captions. Our two-stage approach leverages large vision-language models and language models to create knowledge-augmented …
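
For readers who want to explore the data, below is a minimal sketch of streaming the dataset from the Hugging Face Hub. The repository id "Salesforce/blip3-kale" and the record fields are assumptions based on the release naming, not confirmed by this page.

```python
import itertools
from datasets import load_dataset

# Stream rather than download: KALE contains 218M image-text pairs.
# Repo id "Salesforce/blip3-kale" is an assumption, not confirmed here.
kale = load_dataset("Salesforce/blip3-kale", split="train", streaming=True)

# Peek at a few records; the exact field names may differ.
for row in itertools.islice(kale, 3):
    print(row)
```

Streaming mode avoids materializing the full corpus on disk, which matters at this scale; a subset can still be shuffled or filtered lazily before any images are fetched.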