Explainable AI improves task performance in human-AI collaboration

Type: Preprint

Publication Date: 2024-06-12

Citations: 0

DOI: https://doi.org/10.48550/arxiv.2406.08271

Abstract

Artificial intelligence (AI) provides considerable opportunities to assist human work. However, one crucial challenge of human-AI collaboration is that many AI algorithms operate in a black-box manner, where how the AI arrives at its predictions remains opaque. This makes it difficult for humans to validate an AI prediction against their own domain knowledge. For this reason, we hypothesize that augmenting humans with explainable AI as a decision aid improves task performance in human-AI collaboration. To test this hypothesis, we analyze the effect of augmenting domain experts with explainable AI in the form of visual heatmaps. We then compare participants who were supported by either (a) a black-box AI or (b) an explainable AI, where the latter helps them follow the AI's predictions when they are correct and overrule the AI when its predictions are wrong. We conducted two preregistered experiments with representative, real-world visual inspection tasks from manufacturing and medicine. The first experiment was conducted with factory workers from an electronics factory, who performed $N=9,600$ assessments of whether electronic products have defects. The second experiment was conducted with radiologists, who performed $N=5,650$ assessments of chest X-ray images to identify lung lesions. The results of our experiments with domain experts performing real-world tasks show that task performance improves when participants are supported by explainable AI instead of black-box AI. For example, in the manufacturing setting, augmenting participants with explainable AI (as opposed to black-box AI) leads to a five-fold decrease in the median error rate of human decisions, a significant improvement in task performance.
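The explanations studied here are visual heatmaps overlaid on the inspected image. The abstract does not name the attribution method used, so the following is a minimal sketch of one common way to produce such heatmaps, Grad-CAM, assuming a PyTorch CNN classifier; the model choice (ResNet-18) and all identifiers are illustrative stand-ins, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical stand-in for the inspection model (e.g., defect classifier).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Capture activations and gradients of the last convolutional block.
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return a heatmap (H, W) in [0, 1] for one (1, 3, H, W) input image."""
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()
    acts = activations["value"]                       # (1, C, h, w)
    grads = gradients["value"]                        # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)    # per-channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:],    # upsample to input size
                        mode="bilinear", align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Usage: highlight the evidence behind the predicted class of a dummy image.
x = torch.randn(1, 3, 224, 224)
pred = model(x).argmax(dim=1).item()
heatmap = grad_cam(x, pred)  # overlay on the input to show the flagged regions
```

Overlaying such a heatmap on the inspected item shows which image regions drove the prediction, which is what allows a domain expert to confirm an accurate AI prediction or overrule a wrong one.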

Locations

  • arXiv (Cornell University)

Similar Works

  • Uncalibrated Models Can Improve Human-AI Collaboration (2022) - Kailas Vodrahalli, Tobias Gerstenberg, James Zou
  • Machine Explanations and Human Understanding (2022) - Chacha Chen, Feng Shi, Amit Sharma, Chenhao Tan
  • Machine Explanations and Human Understanding (2023) - Chacha Chen, Shi Feng, Amit Sharma, Chenhao Tan
  • Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making (2020) - Yunfeng Zhang, Q. Vera Liao, Rachel Bellamy
  • A Unified Framework for Evaluating the Effectiveness and Enhancing the Transparency of Explainable AI Methods in Real-World Applications (2024) - Md. Ariful Islam, M. F. Mridha, Md Abrar Jahin, Nilanjan Dey
  • HIVE: Evaluating the Human Interpretability of Visual Explanations (2021) - Sunnie Kim, Nicole Meister, Vidhya Ramaswamy, Ruth Fong, Olga Russakovsky
  • Optimising Human-AI Collaboration by Learning Convincing Explanations (2023) - Alex J. Chan, Alihan Hüyük, Mihaela van der Schaar
  • Responsibility: An Example-based Explainable AI approach via Training Process Inspection (2022) - Faraz Khadivpour, Arghasree Banerjee, Matthew Guzdial
  • The benefits and costs of explainable artificial intelligence in visual quality control: Evidence from fault detection performance and eye movements (2023) - Romy Müller, David F. Reindel, Yannick D. Stadtfeld
  • Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations (2023) - Valerie Chen, Q. Vera Liao, Jennifer Wortman Vaughan, Gagan Bansal
  • Evaluating the Influences of Explanation Style on Human-AI Reliance (2024) - Emma R. Casolin, Flora D. Salim, Ben R. Newell
  • The benefits and costs of explainable artificial intelligence in visual quality control: Evidence from fault detection performance and eye movements (2024) - Romy Müller, David F. Reindel, Yannick D. Stadtfeld
  • Does Explainable Artificial Intelligence Improve Human Decision-Making? (2021) - Yasmeen Alufaisan, Laura R. Marusich, Jonathan Z. Bakdash, Yan Zhou, Murat Kantarcıoğlu
  • Does Explainable Artificial Intelligence Improve Human Decision-Making? (2020) - Yasmeen Alufaisan, Laura R. Marusich, Jonathan Z. Bakdash, Yan Zhou, Murat Kantarcıoğlu
  • Explain To Decide: A Human-Centric Review on the Role of Explainable Artificial Intelligence in AI-assisted Decision Making (2023) - Milad Rogha
  • Human-AI Co-Learning for Data-Driven AI (2019) - Yi‐Ching Huang, Yu-Ting Cheng, Linlin Chen, Jane Yung-jen Hsu
  • Confident Teacher, Confident Student? A Novel User Study Design for Investigating the Didactic Potential of Explanations and their Impact on Uncertainty (2024) - Teodor Chiaburu, Frank Haußer, Felix Bießmann

Works That Cite This (0)

Works Cited by This (0)