Erico Tjoa
All published works

Self reward design with fine-grained interpretability (2023). Erico Tjoa, Cuntai Guan.
Quantifying Explainability of Saliency Methods in Deep Neural Networks With a Synthetic Dataset (2022). Erico Tjoa, Cuntai Guan.
Improving Deep Neural Network Classification Confidence using Heatmap-based eXplainable AI (2022). Erico Tjoa, Hong Jing Khok, Tushar Chouhan, Cuntai Guan.
Convolutional Neural Network Interpretability with General Pattern Theory (2021). Erico Tjoa, Cuntai Guan.
A Modified Convolutional Network for Auto-encoding based on Pattern Theory Growth Function (2021). Erico Tjoa.
Self Reward Design with Fine-grained Interpretability (2021). Erico Tjoa, Cuntai Guan.
Two Instances of Interpretable Neural Network for Universal Approximations (2021). Erico Tjoa, Cuntai Guan.
A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI (2020). Erico Tjoa, Cuntai Guan.
Quantifying Explainability of Saliency Methods in Deep Neural Networks (2020). Erico Tjoa, Cuntai Guan.
Generalization on the Enhancement of Layerwise Relevance Interpretability of Deep Neural Network (2020). Erico Tjoa, Cuntai Guan.
Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset (2020). Erico Tjoa, Cuntai Guan.
Enhancing the Extraction of Interpretable Information for Ischemic Stroke Imaging from Deep Neural Networks (2019). Erico Tjoa, Heng Guo, Yuhao Lu, Cuntai Guan.
Common Coauthors (papers together)

Cuntai Guan: 11
Heng Guo: 1
Tushar Chouhan: 1
Hong Jing Khok: 1
Yuhao Lu: 1
Commonly Cited References (with number of times referenced)

Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization (2016). Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, Dhruv Batra. Referenced 6 times.
Evaluating the Visualization of What a Deep Neural Network Has Learned (2016). Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, Klaus-Robert Müller. Referenced 5 times.
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) (2017). Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, Fernanda Viégas, Rory Sayres. Referenced 5 times.
Learning Deep Features for Discriminative Localization (2016). Bolei Zhou, Aditya Khosla, Àgata Lapedriza, Aude Oliva, Antonio Torralba. Referenced 5 times.
Striving for Simplicity: The All Convolutional Net (2014). Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller. Referenced 4 times.
A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI (2020). Erico Tjoa, Cuntai Guan. Referenced 4 times.
Interpretable Explanations of Black Boxes by Meaningful Perturbation (2017). Ruth Fong, Andrea Vedaldi. Referenced 4 times.
Analyzing Neuroimaging Data Through Recurrent Deep Learning Models (2019). Armin W. Thomas, Hauke R. Heekeren, Klaus-Robert Müller, Wojciech Samek. Referenced 3 times.
Unmasking Clever Hans predictors and assessing what machines really learn (2019). Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller. Referenced 3 times.
Network Dissection: Quantifying Interpretability of Deep Visual Representations (2017). David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, Antonio Torralba. Referenced 3 times.
Testing the Robustness of Attribution Methods for Convolutional Neural Networks in MRI-Based Alzheimer's Disease Classification (2019). Fabian Eitel, Kerstin Ritter. Referenced 3 times.
Generation of Multimodal Justification Using Visual Word Constraint Model for Explainable Computer-Aided Diagnosis (2019). Hyebin Lee, Seong Tae Kim, Yong Man Ro. Referenced 3 times.
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI (2019). Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins. Referenced 3 times.
Explaining Explanations: An Overview of Interpretability of Machine Learning (2018). Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal. Referenced 3 times.
Autofocus Layer for Semantic Segmentation (2018). Yao Qin, Konstantinos Kamnitsas, Siddharth Ancha, Jay Nanavati, Garrison W. Cottrell, Antonio Criminisi, Aditya Nori. Referenced 3 times.
A Unified Approach to Interpreting Model Predictions (2017). Scott Lundberg, Su-In Lee. Referenced 3 times.
Brain Biomarker Interpretation in ASD Using Deep Learning and fMRI (2018). Xiaoxiao Li, Nicha C. Dvornek, Juntang Zhuang, Pamela Ventola, James S. Duncan. Referenced 3 times.
Axiomatic Attribution for Deep Networks (2017). Mukund Sundararajan, Ankur Taly, Qiqi Yan. Referenced 3 times.
Multiple Instance Learning for Heterogeneous Images: Training a CNN for Histopathology (2018). Heather D. Couture, J. S. Marron, Charles M. Perou, Melissa A. Troester, Marc Niethammer. Referenced 3 times.
CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison (2019). Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn L. Ball, Katie Shpanskaya. Referenced 3 times.
Learning Important Features Through Propagating Activation Differences (2017). Avanti Shrikumar, Peyton Greenside, Anshul Kundaje. Referenced 3 times.
A Benchmark for Interpretability Methods in Deep Neural Networks (2018). Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim. Referenced 3 times.
Quantitative Evaluations on Saliency Methods: An Experimental Study (2020). Xiaohui Li, Yuhan Shi, Haoyang Li, Wei Bai, Yuanwei Song, Caleb Chen Cao, Lei Chen. Referenced 2 times.
SmoothGrad: removing noise by adding noise (2017). Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, Martin Wattenberg. Referenced 2 times.
Understanding the role of individual units in a deep neural network (2020). David Bau, Jun-Yan Zhu, Hendrik Strobelt, Àgata Lapedriza, Bolei Zhou, Antonio Torralba. Referenced 2 times.
3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation (2016). Özgün Çiçek, Ahmed Abdulkadir, Soeren S. Lienkamp, Thomas Brox, Olaf Ronneberger. Referenced 2 times.
Understanding Neural Networks Through Deep Visualization (2015). Jason Yosinski, Jeff Clune, Anh Mai Nguyen, Thomas J. Fuchs, Hod Lipson. Referenced 2 times.
There and Back Again: Revisiting Backpropagation Saliency Methods (2020). Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, Andrea Vedaldi. Referenced 2 times.
Towards Automatic Concept-based Explanations (2019). Amirata Ghorbani, James Wexler, James Zou, Been Kim. Referenced 2 times.
Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models (2017). Wojciech Samek, Thomas Wiegand, Klaus-Robert Müller. Referenced 2 times.
Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks (2016). Anh Mai Nguyen, Jason Yosinski, Jeff Clune. Referenced 2 times.
Deep Residual Learning for Image Recognition (2016). Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Referenced 2 times.
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps (2013). Karen Simonyan, Andrea Vedaldi, Andrew Zisserman. Referenced 2 times.
Interpretation of Neural Networks Is Fragile (2019). Amirata Ghorbani, Abubakar Abid, James Zou. Referenced 2 times.
Striving for Simplicity: The All Convolutional Net (2014). Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller. Referenced 2 times.
Visualizing Deep Neural Network Decisions: Prediction Difference Analysis (2017). Luisa Zintgraf, Taco Cohen, Tameem Adel, Max Welling. Referenced 2 times.
How to Explain Individual Classification Decisions (2009). David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, Klaus-Robert Müller. Referenced 2 times.
Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks (2017). José Oramas, Kaili Wang, Tinne Tuytelaars. Referenced 2 times.
One weird trick for parallelizing convolutional neural networks (2014). Alex Krizhevsky. Referenced 2 times.
Enhancing the Extraction of Interpretable Information for Ischemic Stroke Imaging from Deep Neural Networks (2019). Erico Tjoa, Heng Guo, Yuhao Lu, Cuntai Guan. Referenced 2 times.
Investigating the influence of noise and distractors on the interpretation of neural networks (2016). Pieter-Jan Kindermans, Kristof T. Schütt, Klaus-Robert Müller, Sven Dähne. Referenced 1 time.
Pattern Theory: From Representation to Inference (2007). Ulf Grenander, Michael I. Miller. Referenced 1 time.
A Pattern-Theoretic Characterization of Biological Growth (2007). Ulf Grenander, Anuj Srivastava, Sanjay Saini. Referenced 1 time.
Quantum-chemical insights from deep tensor neural networks (2017). Kristof T. Schütt, Farhad Arbabzadah, Stefan Chmiela, K. Müller, Alexandre Tkatchenko. Referenced 1 time.
"Why Should I Trust You?": Explaining the Predictions of Any Classifier (2016). Marco Ribeiro, Sameer Singh, Carlos Guestrin. Referenced 1 time.
Understanding deep image representations by inverting them (2015). Aravindh Mahendran, Andrea Vedaldi. Referenced 1 time.
Hierarchical Question-Image Co-Attention for Visual Question Answering (2016). Jiasen Lu, Jianwei Yang, Dhruv Batra, Devi Parikh. Referenced 1 time.
Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model (2015). Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, David Madigan. Referenced 1 time.
Learning how to explain neural networks: PatternNet and PatternAttribution (2017). Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, K. Müller, Dumitru Erhan, Been Kim, Sven Dähne. Referenced 1 time.