Rethinking Interpretation: Input-Agnostic Saliency Mapping of Deep Visual Classifiers
Saliency methods provide post-hoc model interpretation by attributing a model's outputs to its input features. Current methods mainly compute this attribution from a single input sample, which prevents them from answering input-independent inquiries about the model. Moreover, we show that input-specific saliency mapping is intrinsically susceptible to misleading feature attribution. Current attempts to …
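For concreteness, below is a minimal sketch of the standard input-specific approach that the abstract refers to (vanilla gradient saliency for one image), not the input-agnostic method proposed in this paper. It assumes PyTorch/torchvision, an off-the-shelf ResNet-18, and a placeholder image path.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained classifier (ResNet-18 chosen only as an illustrative example).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Preprocess a single input sample ("example.jpg" is a placeholder path).
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# Gradient of the top predicted class score with respect to this one input.
logits = model(image)
top_class = logits.argmax(dim=1)
logits[0, top_class].backward()

# Saliency map: per-pixel maximum absolute gradient over colour channels.
saliency = image.grad.abs().max(dim=1)[0].squeeze(0)  # shape (224, 224)
```

Because the map is derived from the gradient at a single sample, it answers questions about that sample only; this is the input dependence that the paper argues limits and can mislead feature attribution.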