Towards better understanding of gradient-based attribution methods for Deep Neural Networks

Type: Preprint

Publication Date: 2017-01-01

Citations: 288

DOI: https://doi.org/10.48550/arxiv.1711.06104

Locations

  • arXiv (Cornell University)
  • Zurich Open Repository and Archive (University of Zurich)
  • Repository for Publications and Research Data (ETH Zurich)
  • DataCite API

Similar Works

  • Towards better understanding of gradient-based attribution methods for Deep Neural Networks (2017), by Marco Ancona, Enea Ceolini, Cengiz Öztireli, Markus Groß
  • A unified view of gradient-based attribution methods for Deep Neural Networks (2017), by Marco Ancona, Enea Ceolini, Cengiz Öztireli, Markus Groß
  • Robust Explainability: A tutorial on gradient-based attribution methods for deep neural networks (2022), by Ian E. Nielsen, Dimah Dera, Ghulam Rasool, Ravi P. Ramachandran, Nidhal Bouaynaya
  • Four Axiomatic Characterizations of the Integrated Gradients Attribution Method (2023), by Daniel Lundström, Meisam Razaviyayn
  • Axiomatic Attribution for Deep Networks (2017), by Mukund Sundararajan, Ankur Taly, Qiqi Yan
  • Towards More Robust Interpretation via Local Gradient Alignment (2023), by Sunghwan Joo, SeokHyeon Jeong, Juyeon Heo, Adrian Weller, Taesup Moon
  • On Gradient-like Explanation under a Black-box Setting: When Black-box Explanations Become as Good as White-box (2023), by Yi Cai, Gerhard Wunder
  • A Rigorous Study of Integrated Gradients Method and Extensions to Internal Neuron Attributions (2022), by Daniel Lundström, Tianjian Huang, Meisam Razaviyayn
  • Attribution Mask: Filtering Out Irrelevant Features By Recursively Focusing Attention on Inputs of DNNs (2021), by Jaehong Lee, Joon-Hyuk Chang
  • Negative Flux Aggregation to Estimate Feature Attributions (2023), by Xin Li, Deng Pan, Chengyin Li, Qiang Yao, Dongxiao Zhu
  • Toward Understanding the Disagreement Problem in Neural Network Feature Attribution (2024), by Niklas Koenen, Marvin Wright
  • Reliable Evaluation of Attribution Maps in CNNs: A Perturbation-Based Approach (2024), by Lars Nieradzik, Henrike Stephani, Janis Keuper
  • A-FMI: Learning Attributions from Deep Networks via Feature Map Importance (2021), by An Zhang, Xiang Wang, Chengfang Fang, Jie Shi, Xiangnan He, Tat‐Seng Chua, Zehua Chen
  • Towards More Robust Interpretation via Local Gradient Alignment (2022), by Sunghwan Joo, Seokhyeon Jeong, Juyeon Heo, Adrian Weller, Taesup Moon
  • MFABA: A More Faithful and Accelerated Boundary-based Attribution Method for Deep Neural Networks (2023), by Zhiyu Zhu, Huaming Chen, J. Z. Zhang, Xinyi Wang, Zhibo Jin, Minhui Xue, Dongxiao Zhu, Kim‐Kwang Raymond Choo
  • Investigating Saturation Effects in Integrated Gradients (2020), by Vivek Miglani, Narine Kokhlikyan, Bilal Alsallakh, Miguel Vargas Martín, Orion Reblitz-Richardson
  • Restricting the Flow: Information Bottlenecks for Attribution (2020), by Karl Schulz, Leon Sixt, Federico Tombari, Tim Landgraf

Works That Cite This (119)

  • Deep learning with photosensor timing information as a background rejection method for the Cherenkov Telescope Array (2021), by S. Spencer, T. P. Armstrong, J. R. Watson, S. Mangano, Y. Renier, G. Cotter
  • Improvement of variables interpretability in kernel PCA (2023), by Mitja Briscik, Marie‐Agnès Dillies, Sébastien Dejean
  • Illuminati: Towards Explaining Graph Neural Networks for Cybersecurity Analysis (2022), by Haoyu He, Yuede Ji, H. Howie Huang
  • TSGB: Target-Selective Gradient Backprop for Probing CNN Visual Saliency (2022), by Lin Cheng, Pengfei Fang, Yanjie Liang, Liao Zhang, Chunhua Shen, Hanzi Wang
  • Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations (2021), by Wolfgang Stammer, Patrick Schramowski, Kristian Kersting
  • AdapLeR: Speeding up Inference by Adaptive Length Reduction (2022), by Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar
  • Does Your Model Think Like an Engineer? Explainable AI for Bearing Fault Detection with Deep Learning (2023), by Thomas Decker, Michael Lebacher, Volker Tresp
  • Feature perturbation augmentation for reliable evaluation of importance estimators in neural networks (2023), by Lennart Brocki, Neo Christopher Chung
  • CLEVR-XAI: A benchmark dataset for the ground truth evaluation of neural network explanations (2021), by Leila Arras, Ahmed Osman, Wojciech Samek
  • A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When? (2023), by Subrato Bharati, M. Rubaiyat Hossain Mondal, Prajoy Podder

Works Cited by This (0)

(none listed)