Novel Techniques to Assess Predictive Systems and Reduce Their Alarm Burden

Type: Preprint

Publication Date: 2021-02-10

Citations: 0

Abstract

Machine prediction algorithms (e.g., binary classifiers) are often adopted on the basis of claimed performance using classic metrics such as sensitivity and predictive value. However, classifier performance depends heavily on the context (workflow) in which the classifier operates. Classic metrics do not reflect the realized utility of a predictor unless certain implicit assumptions are met, and these assumptions cannot be met in many common clinical scenarios. This often results in suboptimal implementations and in disappointment when expected outcomes are not achieved. One common failure mode for classic metrics arises when multiple predictions can be made for the same event, particularly when redundant true positive predictions produce little additional value. This describes many clinical alerting systems. We explain why classic metrics cannot correctly represent predictor performance in such contexts, and introduce an improved performance assessment technique using utility functions to score predictions based on their utility in a specific workflow context. The resulting utility metrics (u-metrics) explicitly account for the effects of temporal relationships on prediction utility. Compared to traditional measures, u-metrics more accurately reflect the real-world costs and benefits of a predictor operating in a live clinical context. The improvement can be significant. We also describe a formal approach to snoozing, a mitigation strategy in which some predictions are suppressed to improve predictor performance by reducing false positives while retaining event capture. Snoozing is especially useful for predictors that generate interruptive alarms. U-metrics correctly measure and predict the performance benefits of snoozing, whereas traditional metrics do not.
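The two ideas in the abstract, utility-weighted scoring of individual predictions and snoozing of redundant alerts, can be sketched in a few lines of code. The sketch below is not taken from the paper: the Prediction type, the utility weights (1.0 for the first true positive on an event, 0.0 for redundant true positives, an assumed 0.2 penalty per false positive), and the one-hour snooze window are illustrative assumptions chosen only to show the mechanics.

```python
from dataclasses import dataclass
from typing import List, Optional, Set

# Illustrative sketch only (not the paper's implementation): a toy utility
# function and a simple snoozing policy for an interruptive alerting classifier.

@dataclass(frozen=True)
class Prediction:
    time: float                  # hour at which the alert fires
    event_time: Optional[float]  # hour of the true event it flags, or None for a false alarm

def utility(pred: Prediction, captured: Set[float]) -> float:
    """Toy utility: the first true positive for an event earns 1.0, redundant
    true positives for the same event earn 0.0, and each false positive costs
    an assumed interruption penalty of 0.2."""
    if pred.event_time is None:
        return -0.2
    if pred.event_time in captured:
        return 0.0               # redundant alert: little added clinical value
    captured.add(pred.event_time)
    return 1.0

def u_score(preds: List[Prediction]) -> float:
    """Sum of per-prediction utilities (a toy u-metric)."""
    captured: Set[float] = set()
    return sum(utility(p, captured) for p in sorted(preds, key=lambda p: p.time))

def snooze(preds: List[Prediction], window: float) -> List[Prediction]:
    """Simple snoozing policy: suppress any alert that fires within `window`
    hours of the most recently delivered alert."""
    delivered: List[Prediction] = []
    last_time: Optional[float] = None
    for p in sorted(preds, key=lambda p: p.time):
        if last_time is None or p.time - last_time >= window:
            delivered.append(p)
            last_time = p.time
    return delivered

if __name__ == "__main__":
    preds = [
        Prediction(1.0, 5.0), Prediction(1.5, 5.0), Prediction(2.0, 5.0),  # repeat alerts, one event
        Prediction(3.0, None), Prediction(3.2, None),                      # two false alarms
    ]
    print("u-score, all alerts delivered:", u_score(preds))                # 0.6
    print("u-score, snoozed (1 h window):", u_score(snooze(preds, 1.0)))   # 0.8
```

In this toy example the snoozed alert stream still captures the event but delivers one fewer false alarm, so the u-score rises from 0.6 to 0.8 even though classic sensitivity is unchanged; that gap is the kind of effect the abstract says traditional metrics miss.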

Locations

  • arXiv (Cornell University)

Similar Works

  • Novel Techniques to Assess Predictive Systems and Reduce Their Alarm Burden (2022). Jonathan A. Handler, Craig F. Feied, Michael T. Gillam
  • AutoScore: An Automated Warning Score Model for the Early Prediction of Clinical Events (2021). Ibrahim Hammoud, Prateek Prasanna, I. V. Ramakrishnan, A. Singer, Mark C. Henry, Henry C. Thode
  • EventScore: An Automated Real-time Early Warning Score for Clinical Events (2021). Ibrahim Hammoud, Prateek Prasanna, I. V. Ramakrishnan, Adam J. Singer, Mark Henry, Henry C. Thode
  • EventScore: An Automated Real-time Early Warning Score for Clinical Events (2022). Ibrahim Hammoud, Prateek Prasanna, I. V. Ramakrishnan, Adam J. Singer, Mark C. Henry, Henry C. Thode
  • Dynamic Survival Analysis for Early Event Prediction (2024). Hugo Yèche, M. Burger, Dinara Veshchezerova, Gunnar Rätsch
  • Comparison of control charts for monitoring clinical performance using binary data (2017). Jenny Neuburger, Kate Walker, Chris Sherlaw‐Johnson, Jan van der Meulen, David Cromwell
  • Performance evaluation of predictive AI models to support medical decisions: Overview and guidance (2024). Ben Van Calster, Gary S. Collins, Andrew J. Vickers, Laure Wynants, Kathleen F. Kerr, Lasai Barreñada, Gaël Varoquaux, Karandeep Singh, Karel Moons, Tina Hernandez‐Boussard
  • Explaining an increase in predicted risk for clinical alerts (2019). Michaela Hardt, Alvin Rajkomar, Gerardo Flores, Andrew M. Dai, Michael Howell, Greg S. Corrado, Claire Cui, Moritz Hardt
  • Investigating Poor Performance Regions of Black Boxes: LIME-based Exploration in Sepsis Detection (2023). Mozhgan Salimiparsa, Surajsinh Parmar, San Lee, C.E. Kim, Yong‐Hwan Kim, Jang Yong Kim
  • Performance metrics for intervention-triggering prediction models do not reflect an expected reduction in outcomes from using the model (2020). Alejandro Schuler, Aashish Bhardwaj, Jean‐Louis Vincent
  • Boosting the interpretability of clinical risk scores with intervention predictions (2022). Eric Loreaux, Ke Yu, Jonas Kemp, Martin Seneviratne, Christina Chen, Subhrajit Roy, Ivan Protsyuk, Natalie Harris, Alexander D’Amour, Steve Yadlowsky
  • New Epochs in AI Supervision: Design and Implementation of an Autonomous Radiology AI Monitoring System (2023). Vasantha Kumar Venugopal, Abhishek Gupta, Rohit Takhar, Vidur Mahajan
  • Classification of Methods to Reduce Clinical Alarm Signals for Remote Patient Monitoring: A Critical Review (2023). Teena Arora, Venki Balasubramanian, Andrew Stranieri, Mai Shenhan, Rajkumar Buyya, M N Islam
  • Predicting Clinical Deterioration in Hospitals (2021). Laleh Jalali, Hsiu-Khuern Tang, Richard H. Goldstein, Joaquín Álvarez Rodríguez
  • Feature Robustness in Non-stationary Health Records: Caveats to Deployable Model Performance in Common Clinical Machine Learning Tasks (2019). Bret Nestor, Matthew B. A. McDermott, Willie Boag, Gabriela Berner, Tristan Naumann, Michael C. Hughes, Anna Goldenberg, Marzyeh Ghassemi
  • Impact of novel aggregation methods for flexible, time-sensitive EHR prediction without variable selection or cleaning (2019). J. Deasy, Ari Ercole, Pietro Liò
  • Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves (2016). Wen‐Chung Lee, Yun‐Chun Wu
  • Leveraging Clinical Time-Series Data for Prediction: A Cautionary Tale (2018). Eli Sherman, Hitinder S. Gurm, Ulysses J. Balis, Scott Owens, Jenna Wiens

Works That Cite This (0)


Works Cited by This (0)
