BMC Medical Informatics and Decision Making

A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare

Christopher M. Horvat¹, Amie J. Barda², Harry Hochheiser³
[1] Children’s Hospital of Pittsburgh of UPMC, Pittsburgh, PA 15224, USA; Department of Critical Care Medicine, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA; Safar Center for Resuscitation Research, University of Pittsburgh, Pittsburgh, PA 15224, USA; Brain Care Institute, Children’s Hospital of Pittsburgh of UPMC, Pittsburgh, PA 15261, USA; Department of Biomedical Informatics, School of Medicine, University of Pittsburgh, 5607 Baum Boulevard, Pittsburgh, PA 15206, USA; Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA 15213, USA
Keywords: Machine learning; Explainable artificial intelligence; User-computer interface; Clinical decision support systems; In-hospital mortality; Pediatric intensive care units
DOI: 10.1186/s12911-020-01276-x
Source: Springer
【 Abstract 】
Background: There is increasing interest in clinical prediction tools that can achieve high prediction accuracy and explain the factors contributing to increased risk of adverse outcomes. However, approaches to explaining complex machine learning (ML) models are rarely informed by end-user needs, and user evaluations of model interpretability are lacking in the healthcare domain. We extended previously published theoretical frameworks to propose a framework for the design of user-centered displays of explanations. This new framework served as the basis for qualitative inquiries and design review sessions with critical care nurses and physicians, which informed the design of a user-centered explanation display for an ML-based prediction tool.

Methods: We used our framework to propose explanation displays for predictions from a pediatric intensive care unit (PICU) in-hospital mortality risk model. The proposed displays used a model-agnostic, instance-level explanation approach that quantifies feature influence with Shapley values (a minimal code sketch of this approach follows the abstract). Focus group sessions solicited critical care provider feedback on the proposed displays, which were then revised accordingly.

Results: The proposed displays were perceived as useful tools for assessing model predictions. However, specific explanation goals and information needs varied by clinical role and level of predictive modeling knowledge. Providers preferred explanation displays that required less information processing effort and could support the information needs of a variety of users. Supporting information to assist in interpretation was seen as critical for fostering provider understanding and acceptance of the predictions and explanations. The final user-centered explanation display for the PICU in-hospital mortality risk model incorporated elements from the initial displays along with enhancements suggested by providers.

Conclusions: We proposed a framework for the design of user-centered displays of explanations for ML models and used it to motivate the design of an explanation display for predictions from a PICU in-hospital mortality risk model. Positive feedback from focus group participants provides preliminary support for model-agnostic, instance-level explanations of feature influence as an approach to understanding ML model predictions in healthcare, and advances the discussion on how to effectively communicate ML model information to healthcare providers.
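To make the Methods concrete, below is a minimal sketch of a model-agnostic, instance-level, Shapley-value explanation of the kind the displays were built on, using the open-source shap library. The synthetic data, the feature names, and the logistic regression stand-in are illustrative assumptions, not the authors' PICU mortality model.

```python
# Minimal sketch: model-agnostic, instance-level feature-influence
# explanation via Shapley values (shap's KernelExplainer).
# All data, feature names, and the model below are hypothetical.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age_months", "lactate", "gcs", "heart_rate"]  # hypothetical

# Synthetic stand-in data; in practice this would be the PICU cohort.
X = rng.normal(size=(500, 4))
y = (X @ np.array([0.2, 1.5, -1.2, 0.6]) + rng.normal(size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)  # stand-in for the risk model

# Single-output risk function: predicted probability of the adverse outcome.
def risk(data):
    return model.predict_proba(data)[:, 1]

# KernelExplainer is model-agnostic: it only queries risk(), never the
# model internals, and estimates one Shapley value per feature.
background = shap.sample(X, 100, random_state=0)
explainer = shap.KernelExplainer(risk, background)

patient = X[:1]                          # one instance to explain
phi = explainer.shap_values(patient)[0]  # per-feature contributions

# Shapley values decompose the prediction around the baseline:
# risk(patient) ≈ expected_value + sum(phi)
print(f"baseline risk:  {explainer.expected_value:.3f}")
print(f"predicted risk: {risk(patient)[0]:.3f}")
for name, contrib in sorted(zip(feature_names, phi), key=lambda t: -abs(t[1])):
    print(f"  {name:>12}: {contrib:+.3f}")
```

An explanation display would then render these signed, per-patient contributions (for example, as a sorted bar chart anchored at the baseline risk) rather than the raw numbers.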
【 License 】
CC BY
【 Preview 】
| Files | Size | Format | View |
|---|---|---|---|
| RO202104275992676ZK.pdf | 1781 KB | PDF | |