Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations

Type: Article

Publication Date: 2023-09-28

Citations: 44

DOI: https://doi.org/10.1145/3610219

Abstract

AI explanations are often mentioned as a way to improve human-AI decision-making, but empirical studies have not found consistent evidence of explanations' effectiveness and, on the contrary, suggest that they can increase overreliance when the AI system is wrong. While many factors may affect reliance on AI support, one important factor is how decision-makers reconcile their own intuition---beliefs or heuristics, based on prior knowledge, experience, or pattern recognition, used to make judgments---with the information provided by the AI system to determine when to override AI predictions. We conduct a think-aloud, mixed-methods study with two explanation types (feature- and example-based) for two prediction tasks to explore how decision-makers' intuition affects their use of AI predictions and explanations, and ultimately their choice of when to rely on AI. Our results identify three types of intuition involved in reasoning about AI predictions and explanations: intuition about the task outcome, features, and AI limitations. Building on these, we summarize three observed pathways for decision-makers to apply their own intuition and override AI predictions. We use these pathways to explain why (1) the feature-based explanations we used did not improve participants' decision outcomes and increased their overreliance on AI, and (2) the example-based explanations we used improved decision-makers' performance over feature-based explanations and helped achieve complementary human-AI performance. Overall, our work identifies directions for further development of AI decision-support systems and explanation methods that help decision-makers effectively apply their intuition to achieve appropriate reliance on AI.

Locations

  • Proceedings of the ACM on Human-Computer Interaction
  • arXiv (Cornell University)

Similar Works

  • Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations (2023). Valerie Chen, Q. Vera Liao, Jennifer Wortman Vaughan, Gagan Bansal
  • Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills (2024). Zana Buçinca, Siddharth Swaroop, Amanda E. Paluch, Finale Doshi‐Velez, Krzysztof Z. Gajos
  • A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making (2022). Max Schemmer, Patrick Hemmer, Maximilian Nitsche, Niklas Kühl, Michael Vössing
  • Does Explainable Artificial Intelligence Improve Human Decision-Making? (2021). Yasmeen Alufaisan, Laura R. Marusich, Jonathan Z. Bakdash, Yan Zhou, Murat Kantarcıoğlu
  • In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making (2023). Raymond Fok, Daniel S. Weld
  • Does Explainable Artificial Intelligence Improve Human Decision-Making? (2020). Yasmeen Alufaisan, Laura R. Marusich, Jonathan Z. Bakdash, Yan Zhou, Murat Kantarcıoğlu
  • Evaluating the Influences of Explanation Style on Human-AI Reliance (2024). Emma R. Casolin, Flora D. Salim, Ben R. Newell
  • In search of verifiability: Explanations rarely enable complementary performance in AI‐advised decision making (2024). Raymond Fok, Daniel S. Weld
  • On the Interdependence of Reliance Behavior and Accuracy in AI-Assisted Decision-Making (2023). Jakob Schoeffer, Johannes Jakubik, Michael Voessing, Niklas Kuehl, Gerhard Satzger
  • AI Reliance and Decision Quality: Fundamentals, Interdependence, and the Effects of Interventions (2024). Jakob Schoeffer, Johannes Jakubik, Michael Vössing, Niklas Kühl, Gerhard Satzger
  • The human-AI relationship in decision-making: AI explanation to support people on justifying their decisions (2021). Juliana Jansen Ferreira, Mateus de Souza Monteiro
  • Using AI Uncertainty Quantification to Improve Human Decision-Making (2023). Laura R. Marusich, Jonathan Z. Bakdash, Yan Zhou, Murat Kantarcıoğlu
  • Machine Explanations and Human Understanding (2022). Chacha Chen, Feng Shi, Amit Sharma, Chenhao Tan
  • Designing AI Support for Human Involvement in AI-assisted Decision Making: A Taxonomy of Human-AI Interactions from a Systematic Review (2023). Catalina Gómez, Sue Min Cho, Shichang Ke, Chien‐Ming Huang, Mathias Unberath
  • Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making (2020). Yunfeng Zhang, Q. Vera Liao, Rachel Bellamy
  • Utilizing Human Behavior Modeling to Manipulate Explanations in AI-Assisted Decision Making: The Good, the Bad, and the Scary (2024). Zhuoyan Li, Minghao Yin
  • Towards the new XAI: A Hypothesis-Driven Approach to Decision Support Using Evidence (2024). Thao N. Le, Tim Miller, Ronal Singh, Liz Sonenberg

Works That Cite This (17)

  • AXNav: Replaying Accessibility Tests from Natural Language (2023). Maryam Taeb, Amanda Swearngin, Eldon Schoop, Ruijia Cheng, Yue Jiang, Jeffrey Nichols
  • When Are Two Lists Better than One?: Benefits and Harms in Joint Decision-Making (2024). Kate Donahue, Sreenivas Gollapudi, Kostas Kollias
  • Towards Automated Accessibility Report Generation for Mobile Apps (2024). Amanda Swearngin, Jason Wu, Xiaoyi Zhang, Esteban Gómez, J.R. Coughenour, Rachel Stukenborg, Bhavya Garg, Greg Hughes, Adriana Hilliard, Jeffrey P. Bigham
  • Escalation Risks from Language Models in Military and Diplomatic Decision-Making (2024). Juan-Pablo Rivera, Gabriel Mukobi, Anka Reuel, Max Lamparth, Chandler Smith, Jacquelyn Schneider
  • The Impact of Imperfect XAI on Human-AI Decision-Making (2023). Katelyn Morrison, Philipp Spitzer, Violet Turri, Michelle Feng, Niklas Kühl, Adam Perer
  • The Impact of Imperfect XAI on Human-AI Decision-Making (2024). Katelyn Morrison, Philipp Spitzer, Violet Turri, Michelle Feng, Niklas Kühl, Adam Perer
  • Does More Advice Help? The Effects of Second Opinions in AI-Assisted Decision Making (2024). Zhuoran Lu, Dakuo Wang, Ming Yin
  • Explainable artificial intelligence: A survey of needs, techniques, applications, and future direction (2024). Melkamu Mersha, Khang Nhứt Lâm, Joseph Wood, Ali K. AlShami, Jugal Kalita
  • (De)Noise: Moderating the Inconsistency Between Human Decision-Makers (2024). Nina Grgić-Hlača, Junaid Ali, Krishna P. Gummadi, Jennifer Wortman Vaughan
  • When Are Combinations of Humans and AI Useful? (2024). Michelle Vaccaro, Abdullah Almaatouq, Thomas W. Malone

Works Cited by This (24)

  • Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making (2019). Carrie J. Cai, Emily Reif, Narayan Hegde, Jason Hipp, Been Kim, Daniel Smilkov, Martin Wattenberg, Fernanda Viégas, Greg S. Corrado, Martin C. Stumpe
  • The mythos of model interpretability (2018). Zachary C. Lipton
  • Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization (2017). Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
  • Explaining Explanations: An Overview of Interpretability of Machine Learning (2018). Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal
  • Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI (2019). Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins
  • "Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans (2020). Vivian Lai, Han Liu, Chenhao Tan
  • Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach (2020). Upol Ehsan, Mark Riedl
  • Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making (2020). Yunfeng Zhang, Q. Vera Liao, Rachel Bellamy
  • A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores (2020). Maria De‐Arteaga, Riccardo Fogliato, Alexandra Chouldechova
  • "Brilliant AI Doctor" in Rural Clinics: Challenges in AI-Powered Clinical Decision Support System Deployment (2021). Dakuo Wang, Liuping Wang, Zhan Zhang, Ding Wang, Haiyi Zhu, Yvonne Gao, Xiangmin Fan, Feng Tian