Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations

Type: Preprint

Publication Date: 2022-01-01

Citations: 0

DOI: https://doi.org/10.48550/arxiv.2211.12486

Locations

  ‱ arXiv (Cornell University)
  ‱ DataCite API

Similar Works

  ‱ Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations (2023), by Alexander Binder, Leander Weber, Sebastian Lapuschkin, GrĂ©goire Montavon, Klaus-Robert MĂŒller, Wojciech Samek
  ‱ A Note about: Local Explanation Methods for Deep Neural Networks lack Sensitivity to Parameter Values (2018), by Mukund Sundararajan, Ankur Taly
  ‱ Causal Analysis for Robust Interpretability of Neural Networks (2023), by Ola Ahmad, Nicolas BĂ©reux, Vahid Hashemi, Freddy LĂ©cuĂ©
  ‱ Causal Analysis for Robust Interpretability of Neural Networks (2024), by Ola Ahmad, Nicolas BĂ©reux, LoĂŻc Baret, Vahid Hashemi, Freddy LĂ©cuĂ©
  ‱ Robust Explainability: A tutorial on gradient-based attribution methods for deep neural networks (2022), by Ian E. Nielsen, Dimah Dera, Ghulam Rasool, Ravi P. Ramachandran, Nidhal Bouaynaya
  ‱ Precise Benchmarking of Explainable AI Attribution Methods (2023), by RafaĂ«l Brandt, Daan Raatjens, Georgi Gaydadjiev
  ‱ Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey (2019), by Vanessa Buhrmester, David MĂŒnch, Michael Arens
  ‱ Normalized AOPC: Fixing Misleading Faithfulness Metrics for Feature Attribution Explainability (2024), by Joakim Edin, Andreas Geert Motzfeldt, Casper L. Christensen, Tuukka Ruotsalo, Lars MaalĂže, Maria Maistro
  ‱ Better Understanding Differences in Attribution Methods via Systematic Evaluations (2023), by Sukrut Rao, Moritz Böhle, Bernt Schiele
  ‱ Measurably Stronger Explanation Reliability via Model Canonization (2022), by Franz Motzkus, Leander Weber, Sebastian Lapuschkin
  ‱ Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey (2021), by Vanessa Buhrmester, David MĂŒnch, Michael Arens
  ‱ Corrupting Neuron Explanations of Deep Visual Features (2023), by Divyansh Srivastava, Tuomas Oikarinen, Tsui-Wei Weng
  ‱ On Gradient-like Explanation under a Black-box Setting: When Black-box Explanations Become as Good as White-box (2023), by Yi Cai, Gerhard Wunder
  ‱ Benchmarking the Attribution Quality of Vision Models (2024), by Robin Hesse, Simone Schaub-Meyer, S. Roth
  ‱ Assessing the Reliability of Visual Explanations of Deep Models with Adversarial Perturbations (2020), by Dan Valle, Tiago Pimentel, Adriano Veloso
  ‱ Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation (2021), by Sam Sattarzadeh, Mahesh Sudhakar, Anthony Lem, Shervin Mehryar, Konstantinos N. Plataniotis, Jongseong Jang, Hyunwoo Kim, Yeonjeong Jeong, Sangmin Lee, Kyung-Hoon Bae
  ‱ Towards Better Understanding Attribution Methods (2022), by Sukrut Rao, Moritz Böhle, Bernt Schiele

Works That Cite This (0)

Works Cited by This (0)
