XRAI: Better Attributions Through Regions

Type: Article

Publication Date: 2019-10-01

Citations: 158

DOI: https://doi.org/10.1109/iccv.2019.00505

Abstract

Saliency methods can aid understanding of deep neural networks. Recent years have witnessed many improvements to saliency methods, as well as new ways of evaluating them. In this paper, we 1) present a novel region-based attribution method, XRAI, that builds upon integrated gradients (Sundararajan et al. 2017), 2) introduce evaluation methods for empirically assessing the quality of image-based saliency maps (Performance Information Curves (PICs)), and 3) contribute an axiom-based sanity check for attribution methods. Through empirical experiments and example results, we show that XRAI produces better results than other saliency methods for common models and the ImageNet dataset.
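To make "region-based attribution built on integrated gradients" concrete, the sketch below is a minimal, illustrative approximation: it estimates integrated gradients along the straight-line path from a baseline to the input, then sums the per-pixel attributions inside pre-computed image segments and ranks the segments. This is not the authors' implementation; the model_grad callable, the segments array, and the parameter choices are hypothetical placeholders supplied by the caller.

# Illustrative sketch only, not the XRAI implementation.
import numpy as np

def integrated_gradients(image, baseline, model_grad, steps=50):
    """Approximate integrated gradients (Sundararajan et al. 2017).

    image, baseline : arrays of identical shape, e.g. (H, W, C)
    model_grad      : callable returning d(score)/d(input) for an input array
    """
    diff = image - baseline
    total = np.zeros_like(image, dtype=np.float64)
    # Riemann approximation of the path integral from baseline to image.
    for alpha in np.linspace(0.0, 1.0, steps):
        total += model_grad(baseline + alpha * diff)
    return diff * total / steps

def region_attributions(ig_map, segments):
    """Sum per-pixel attributions inside each segment and rank regions.

    ig_map   : (H, W, C) attribution map from integrated_gradients
    segments : (H, W) integer array assigning each pixel to a region
    """
    pixel_attr = ig_map.sum(axis=-1)                 # collapse channels
    region_ids = np.unique(segments)
    totals = {int(r): float(pixel_attr[segments == r].sum()) for r in region_ids}
    # Regions with the largest total attribution are treated as most salient.
    return sorted(totals.items(), key=lambda kv: -kv[1])

The actual XRAI method goes further, for example by working with overlapping segmentations at multiple scales and growing a ranked set of regions, so this sketch only illustrates the general attribute-then-aggregate idea.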

Locations

  • arXiv (Cornell University)
  • 2019 IEEE/CVF International Conference on Computer Vision (ICCV)

Similar Works

  • XRAI: Better Attributions Through Regions (2019). Andrei Kapishnikov, Tolga Bolukbasi, Fernanda Viégas, Michael Terry
  • COSE: A Consistency-Sensitivity Metric for Saliency on Image Classification (2023). Rangel Daroya, Aaron Sun, Subhransu Maji
  • The (Un)reliability of saliency methods (2017). Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim
  • Efficient Saliency Maps for Explainable AI (2019). T. Nathan Mundhenk, Barry Y. Chen, Gerald Friedland
  • Classification Metrics for Image Explanations: Towards Building Reliable XAI-Evaluations (2024). Benjamin Frész, Lena Lörcher, Marco F. Huber
  • A Simple Saliency Method That Passes the Sanity Checks (2019). Arushi Gupta, Sanjeev Arora
  • Believe The HiPe: Hierarchical Perturbation for Fast, Robust, and Model-Agnostic Saliency Mapping (2021). Jessica Cooper, Ognjen Arandjelović, David J. Harrison
  • Sanity Simulations for Saliency Methods (2021). Joon Sik Kim, Gregory Plumb, Ameet Talwalkar
  • Opti-CAM: Optimizing saliency maps for interpretability (2023). Hanwei Zhang, Felipe Torres, Ronan Sicre, Yannis Avrithis, Stéphane Ayache
  • Opti-CAM: Optimizing saliency maps for interpretability (2024). Hanwei Zhang, Felipe Torres, Ronan Sicre, Yannis Avrithis, Stéphane Ayache
  • Believe the HiPe: Hierarchical perturbation for fast, robust, and model-agnostic saliency mapping (2022). Jessica Cooper, Ognjen Arandjelović, David J. Harrison
  • Input Bias in Rectified Gradients and Modified Saliency Maps (2020). Lennart Brocki, Neo Christopher Chung
  • New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound (2022). Arushi Gupta, Nikunj Saunshi, Dingli Yu, Kaifeng Lyu, Sanjeev Arora
  • Input Bias in Rectified Gradients and Modified Saliency Maps (2021). Lennart Brocki, Neo Christopher Chung
  • Quantitative Evaluations on Saliency Methods: An Experimental Study (2020). Xiaohui Li, Yuhan Shi, Haoyang Li, Wei Bai, Yuanwei Song, Caleb Chen Cao, Lei Chen

Works That Cite This (56)

  • This Changes to That: Combining Causal and Non-Causal Explanations to Generate Disease Progression in Capsule Endoscopy (2023). Anuja Vats, Ahmed Mohammed, Marius Pedersen, Nirmalie Wiratunga
  • Feature perturbation augmentation for reliable evaluation of importance estimators in neural networks (2023). Lennart Brocki, Neo Christopher Chung
  • Guided Integrated Gradients: an Adaptive Path Method for Removing Noise (2021). Andrei Kapishnikov, Subhashini Venugopalan, Besim Avci, Ben Wedin, Michael Terry, Tolga Bolukbasi
  • Visual Interpretable and Explainable Deep Learning Models for Brain Tumor MRI and COVID-19 Chest X-ray Images (2023). Yusuf Brima, Marcellin Atemkeng
  • Attribution in Scale and Space (2020). Shawn Xu, Subhashini Venugopalan, Mukund Sundararajan
  • CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency (2021). Mohammad A. A. K. Jalwana, Naveed Akhtar, Mohammed Bennamoun, Ajmal Mian
  • SESS: Saliency Enhancing with Scaling and Sliding (2022). Osman Tursun, Simon Denman, Sridha Sridharan, Clinton Fookes
  • Algorithms to estimate Shapley value feature attributions (2023). Hugh Chen, Ian Covert, Scott Lundberg, Su‐In Lee
  • IG²: Integrated Gradient on Iterative Gradient Path for Feature Attribution (2024). Yue Zhuo, Zhiqiang Ge
  • Learning and Evaluating Representations for Deep One-class Classification (2020). Kihyuk Sohn, Chun‐Liang Li, Jinsung Yoon, Minho Jin, Tomas Pfister

Works Cited by This (28)

  • Going deeper with convolutions (2015). Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich
  • ImageNet Large Scale Visual Recognition Challenge (2015). Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein
  • Striving for Simplicity: The All Convolutional Net (2014). Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller
  • How to Explain Individual Classification Decisions (2009). David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, Klaus‐Robert Müller
  • Deep Residual Learning for Image Recognition (2016). Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
  • Explaining nonlinear classification decisions with deep Taylor decomposition (2016). Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, Klaus‐Robert Müller
  • Not Just a Black Box: Learning Important Features Through Propagating Activation Differences (2016). Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, Anshul Kundaje
  • Visualizing Deep Neural Network Decisions: Prediction Difference Analysis (2017). Luisa Zintgraf, Taco Cohen, Tameem Adel, Max Welling
  • Axiomatic Attribution for Deep Networks (2017). Mukund Sundararajan, Ankur Taly, Qiqi Yan
  • Learning Important Features Through Propagating Activation Differences (2017). Avanti Shrikumar, Peyton Greenside, Anshul Kundaje