AI Healthcare System Interface: Explanation Design for Non-Expert User Trust

Research indicates that non-expert users tend either to over-trust or to distrust AI systems. This raises concerns when AI is applied to healthcare, where a patient who trusts the advice of an unreliable system, or who completely distrusts a reliable one, may suffer a fatal incident or miss a healthcare opportunity. Previous research has indicated that explanations can help users make appropriate trust judgements about AI systems, but how to design AI explanation interfaces for non-expert users in medical support scenarios remains an open research challenge. This paper explores a stage-based participatory design process for developing a trustworthy explanation interface for non-experts in an AI medical support scenario. A trustworthy explanation is one that helps users make considered judgements about trusting (or not trusting) an AI system with their healthcare. The objective of this paper is to identify the explanation components that can effectively inform the design of a trustworthy explanation interface. To achieve this, we undertook three data collections examining experts' and non-experts' perceptions of an AI medical support system's explanations. We then developed a User Mental Model, an Expert Mental Model, and a Target Mental Model of explanation, describing how non-experts and experts understand explanations, how their understandings differ, and how they can be combined. Based on the Target Mental Model, we propose a set of 14 explanation design guidelines for trustworthy AI healthcare system explanations that take into account non-expert users' needs, medical experts' practice, and AI experts' understanding.
