Christopher J. Anders


All published works
- Physics-Informed Bayesian Optimization of Variational Quantum Circuits (2024). Kim A. Nicoli, Christopher J. Anders, Lena Funcke, Tobias Hartung, Karl Jansen, Stefan KĂŒhn, Klaus-Robert MĂŒller, Paolo Stornati, Pan Kessel, Shinichi Nakajima
- From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space (2024). Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin
- Detecting and Mitigating Mode-Collapse for Flow-based Sampling of Lattice Field Theories (2023). Kim A. Nicoli, Christopher J. Anders, Tobias Hartung, Karl Jansen, Pan Kessel, Shinichi Nakajima
- Towards Fixing Clever-Hans Predictors with Counterfactual Knowledge Distillation (2023). Sidney Bender, Christopher J. Anders, Pattarawat Chormai, Heike Marxfeld, Jan Herrmann, Grégoire Montavon
- From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space (2023). Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin
- Machine Learning of Thermodynamic Observables in the Presence of Mode Collapse (2022). Kim A. Nicoli, Christopher J. Anders, Lena Funcke, Tobias Hartung, Karl Jansen, Pan Kessel, Shinichi Nakajima, Paolo Stornati
- PatClArC: Using Pattern Concept Activation Vectors for Noise-Robust Model Debugging (2022). Frederik Pahde, Leander Weber, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin
- Machine Learning of Thermodynamic Observables in the Presence of Mode Collapse (2021). Kim A. Nicoli, Christopher J. Anders, Lena Funcke, Tobias Hartung, Karl Jansen, Pan Kessel, Shinichi Nakajima, Paolo Stornati
- Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models (2021). Christopher J. Anders, Leander Weber, David Neumann, Wojciech Samek, Klaus-Robert MĂŒller, Sebastian Lapuschkin
- Towards Robust Explanations for Deep Neural Networks (2021). Ann-Kathrin Dombrowski, Christopher J. Anders, Klaus-Robert MĂŒller, Pan Kessel
- Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy (2021). Christopher J. Anders, David Neumann, Wojciech Samek, Klaus-Robert MĂŒller, Sebastian Lapuschkin
- Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications (2021). Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert MĂŒller
- Estimation of Thermodynamic Observables in Lattice Field Theories with Deep Generative Models (2021). Kim A. Nicoli, Christopher J. Anders, Lena Funcke, Tobias Hartung, Karl Jansen, Pan Kessel, Shinichi Nakajima, Paolo Stornati
- Fairwashing Explanations with Off-Manifold Detergent (2020). Christopher J. Anders, Plamen Pasliev, Ann-Kathrin Dombrowski, Klaus-Robert MĂŒller, Pan Kessel
- Toward Interpretable Machine Learning: Transparent Deep Neural Networks and Beyond (2020). Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert MĂŒller
- Towards Robust Explanations for Deep Neural Networks (2020). Ann-Kathrin Dombrowski, Christopher J. Anders, Klaus-Robert MĂŒller, Pan Kessel
- Analyzing ImageNet with Spectral Relevance Analysis: Towards ImageNet un-Hans'ed (2019). Christopher J. Anders, Talmaj Marinč, David Neumann, Wojciech Samek, Klaus-Robert MĂŒller, Sebastian Lapuschkin
- Explanations can be manipulated and geometry is to blame (2019). Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, Marcel R. Ackermann, Klaus-Robert MĂŒller, Pan Kessel
- Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models (2019). Christopher J. Anders, Leander Weber, David Neumann, Wojciech Samek, Klaus-Robert MĂŒller, Sebastian Lapuschkin
- Understanding Patch-Based Learning by Explaining Predictions (2018). Christopher J. Anders, Grégoire Montavon, Wojciech Samek, Klaus-Robert MĂŒller
Commonly Cited References
- Unmasking Clever Hans predictors and assessing what machines really learn (2019). Sebastian Lapuschkin, Stephan WĂ€ldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, Klaus-Robert MĂŒller. Referenced 9 times.
- Explaining nonlinear classification decisions with deep Taylor decomposition (2016). Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, Klaus-Robert MĂŒller. Referenced 8 times.
- Evaluating the Visualization of What a Deep Neural Network Has Learned (2016). Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, Klaus-Robert MĂŒller. Referenced 8 times.
- Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) (2017). Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, Fernanda Viégas, Rory Sayres. Referenced 7 times.
- Interpretable Explanations of Black Boxes by Meaningful Perturbation (2017). Ruth Fong, Andrea Vedaldi. Referenced 7 times.
- Methods for interpreting and understanding deep neural networks (2017). Grégoire Montavon, Wojciech Samek, Klaus-Robert MĂŒller. Referenced 7 times.
- SmoothGrad: removing noise by adding noise (2017). Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, Martin Wattenberg. Referenced 7 times.
- Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps (2013). Karen Simonyan, Andrea Vedaldi, Andrew Zisserman. Referenced 7 times.
- Deep Residual Learning for Image Recognition (2016). Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Referenced 7 times.
- ImageNet Large Scale Visual Recognition Challenge (2015). Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein. Referenced 6 times.
- Analyzing Classifiers: Fisher Vectors and Deep Neural Networks (2016). Sebastian Lapuschkin, Alexander Binder, Grégoire Montavon, Klaus-Robert MĂŒller, Wojciech Samek. Referenced 6 times.
- Learning Important Features Through Propagating Activation Differences (2017). Avanti Shrikumar, Peyton Greenside, Anshul Kundaje. Referenced 6 times.
- Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization (2017). Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra. Referenced 6 times.
- Axiomatic Attribution for Deep Networks (2017). Mukund Sundararajan, Ankur Taly, Qiqi Yan. Referenced 6 times.
- Striving for Simplicity: The All Convolutional Net (2014). Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller. Referenced 6 times.
- Quantum-chemical insights from deep tensor neural networks (2017). Kristof T. SchĂŒtt, Farhad Arbabzadah, Stefan Chmiela, K. MĂŒller, Alexandre Tkatchenko. Referenced 6 times.
- Towards Best Practice in Explaining Neural Network Decisions with LRP (2020). Maximilian Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin. Referenced 5 times.
- How to Explain Individual Classification Decisions (2009). David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, Klaus-Robert MĂŒller. Referenced 5 times.
- Learning how to explain neural networks: PatternNet and PatternAttribution (2017). Pieter-Jan Kindermans, Kristof T. SchĂŒtt, Maximilian Alber, K. MĂŒller, Dumitru Erhan, Been Kim, Sven DĂ€hne. Referenced 5 times.
- A Unifying Review of Deep and Shallow Anomaly Detection (2021). Lukas Ruff, Jacob R. Kauffmann, Robert A. Vandermeulen, Grégoire Montavon, Wojciech Samek, Marius Kloft, Thomas G. Dietterich, Klaus-Robert MĂŒller. Referenced 5 times.
- Resolving challenges in deep learning-based analyses of histopathological images using explanation methods (2020). Miriam HĂ€gele, Philipp Seegerer, Sebastian Lapuschkin, Michael Bockmayr, Wojciech Samek, Frederick Klauschen, Klaus-Robert MĂŒller, Alexander Binder. Referenced 5 times.
- Very Deep Convolutional Networks for Large-Scale Image Recognition (2014). Karen Simonyan, Andrew Zisserman. Referenced 5 times.
- Towards explaining anomalies: A deep Taylor decomposition of one-class models (2020). Jacob R. Kauffmann, Klaus-Robert MĂŒller, Grégoire Montavon. Referenced 4 times.
- Very Deep Convolutional Networks for Large-Scale Image Recognition (2014). Karen Simonyan, Andrew Zisserman. Referenced 4 times.
- Building and Interpreting Deep Similarity Models (2020). Oliver Eberle, Jochen BĂŒttner, Florian KrÀutli, Klaus-Robert MĂŒller, Matteo Valleriani, Grégoire Montavon. Referenced 4 times.
- A Benchmark for Interpretability Methods in Deep Neural Networks (2018). Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim. Referenced 4 times.
- Towards Evaluating the Robustness of Neural Networks (2017). Nicholas Carlini, David Wagner. Referenced 4 times.
- iNNvestigate Neural Networks (2019). Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam HĂ€gele, Kristof T. SchĂŒtt, Grégoire Montavon, Wojciech Samek, K. MĂŒller, Sven DĂ€hne, Pieter-Jan Kindermans. Referenced 4 times.
- From Clustering to Cluster Explanations via Neural Networks (2022). Jacob R. Kauffmann, Malte Esders, Lukas Ruff, Grégoire Montavon, Wojciech Samek, Klaus-Robert MĂŒller. Referenced 4 times.
- "What is relevant in a text document?": An interpretable machine learning approach (2017). Leila Arras, Franziska Horn, Grégoire Montavon, Klaus-Robert MĂŒller, Wojciech Samek. Referenced 4 times.
- Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations (2017). Andrew Slavin Ross, Michael C. Hughes, Finale Doshi-Velez. Referenced 4 times.
- Equivariant Flow-Based Sampling for Lattice Gauge Theory (2020). Gurtej Kanwar, Michael S. Albergo, Denis Boyda, K. Cranmer, Daniel C. Hackett, SĂ©bastien RacaniĂšre, Danilo Jimenez Rezende, Phiala E. Shanahan. Referenced 3 times.
- Fairwashing Explanations with Off-Manifold Detergent (2020). Christopher J. Anders, Plamen Pasliev, Ann-Kathrin Dombrowski, Klaus-Robert MĂŒller, Pan Kessel. Referenced 3 times.
- Making deep neural networks right for the right scientific reasons by interacting with their explanations (2020). Patrick Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Franziska Herbert, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting. Referenced 3 times.
- Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning (2019). Frank Noé, Simon Olsson, Jonas Köhler, Hao Wu. Referenced 3 times.
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks (2019). Ruth Fong, Mandela Patrick, Andrea Vedaldi. Referenced 3 times.
- Batch Effect Confounding Leads to Strong Bias in Performance Estimates Obtained by Cross-Validation (2014). Charlotte Soneson, Sarah Gerster, Mauro Delorenzi. Referenced 3 times.
- Learning Deep Features for Discriminative Localization (2016). Bolei Zhou, Aditya Khosla, Àgata Lapedriza, Aude Oliva, Antonio Torralba. Referenced 3 times.
- Explaining Recurrent Neural Network Predictions in Sentiment Analysis (2017). Leila Arras, Grégoire Montavon, Klaus-Robert MĂŒller, Wojciech Samek. Referenced 3 times.
- Asymptotically unbiased estimation of physical observables with neural samplers (2020). Kim A. Nicoli, Shinichi Nakajima, Nils Strodthoff, Wojciech Samek, Klaus-Robert MĂŒller, Pan Kessel. Referenced 3 times.
- Classifying and segmenting microscopy images with deep multiple instance learning (2016). Oren Kraus, Jimmy Ba, Brendan J. Frey. Referenced 3 times.
- A Unified Approach to Interpreting Model Predictions (2017). Scott Lundberg, Su-In Lee. Referenced 3 times.
- Real Time Image Saliency for Black Box Classifiers (2017). Piotr Dabkowski, Yarin Gal. Referenced 3 times.
- On the interpretation of weight vectors of linear models in multivariate neuroimaging (2013). Stefan Haufe, Frank C. Meinecke, Kai Görgen, Sven DĂ€hne, John-Dylan Haynes, Benjamin Blankertz, Felix Bießmann. Referenced 3 times.
- Explanations can be manipulated and geometry is to blame (2019). Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, Marcel R. Ackermann, Klaus-Robert MĂŒller, Pan Kessel. Referenced 3 times.
- Interpretable deep neural networks for single-trial EEG classification (2016). Irene Sturm, Sebastian Lapuschkin, Wojciech Samek, Klaus-Robert MĂŒller. Referenced 3 times.
- Graying the black box: Understanding DQNs (2016). Tom Zahavy, Nir Ben Zrihem, Shie Mannor. Referenced 3 times.
- A Survey of Methods for Explaining Black Box Models (2018). Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, Dino Pedreschi. Referenced 3 times.
- European Union Regulations on Algorithmic Decision Making and a "Right to Explanation" (2017). Bryce Goodman, Seth Flaxman. Referenced 3 times.
- Top-Down Neural Attention by Excitation Backprop (2017). Jianming Zhang, Sarah Adel Bargal, Zhe Lin, Jonathan Brandt, Xiaohui Shen, Stan Sclaroff. Referenced 3 times.