Aligning Visual and Semantic Interpretability through Visually Grounded Concept Bottleneck Models

The performance of neural networks increases steadily, but our understanding of their decision-making lags behind. Concept Bottleneck Models (CBMs) address this issue by incorporating human-understandable concepts into the prediction process, thereby enhancing transparency and interpretability. Since existing approaches often rely on large language models (LLMs) to infer concepts, their results …
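To make the generic Concept Bottleneck Model idea concrete, here is a minimal sketch (not the method proposed in this paper): a vision backbone predicts scores for human-understandable concepts, and the final label is computed only from those concept activations, so every prediction can be traced back to the concepts that drove it. All names (`ConceptBottleneckModel`, `concept_head`, `label_head`) are illustrative assumptions, not identifiers from the paper.

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Generic CBM: backbone features -> interpretable concept scores -> label."""

    def __init__(self, backbone: nn.Module, feat_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.backbone = backbone                              # e.g. a CNN/ViT feature extractor
        self.concept_head = nn.Linear(feat_dim, n_concepts)   # one logit per human-readable concept
        self.label_head = nn.Linear(n_concepts, n_classes)    # label is predicted from concepts only

    def forward(self, x):
        feats = self.backbone(x)
        concept_logits = self.concept_head(feats)             # the interpretable bottleneck
        concepts = torch.sigmoid(concept_logits)              # concept activations in [0, 1]
        logits = self.label_head(concepts)                    # final class prediction
        return logits, concepts
```

Because the label head sees only the concept activations, inspecting (or intervening on) those activations is what gives CBMs their transparency relative to an end-to-end black-box classifier.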