Explaining Deep Convolutional Neural Networks via Latent Visual-Semantic Filter Attention
Interpretability is an important property for visual models as it helps researchers and users understand the internal mechanism of a complex model. However, generating semantic explanations about the learned representation is challenging without direct supervision to produce such explanations. We propose a general framework, Latent Visual Semantic Explainer (LaViSE), to …