The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI

Abstract Artificial intelligence and algorithmic decision-making processes are increasingly criticized for their black-box nature. Explainable AI approaches that trace human-interpretable decision processes from algorithms have been explored, yet little is known about algorithmic explainability from a human factors perspective. From the perspective of user interpretability and understandability, this study examines the effect of explainability in AI on user trust and attitudes toward AI. It conceptualizes causability as an antecedent of explainability and as a key cue of an algorithm, and examines both in relation to trust by testing how they affect users' perceived performance of AI-driven services. The results show the dual roles of causability and explainability in terms of their underlying links to trust and subsequent user behaviors. Explanations of why certain news articles are recommended generate user trust, whereas causability, the extent to which users can understand those explanations, affords users emotional confidence. Causability provides the justification for what should be explained and how, as it determines the relative importance of the properties of explainability. The results have implications for the inclusion of causability and explanatory cues in AI systems, which can increase trust and help users assess the quality of explanations. Causable explainable AI will help people understand the decision-making process of AI algorithms by bringing transparency and accountability into AI systems.
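The causability → explainability → trust chain described above can be illustrated with a simple bootstrapped mediation test. The sketch below is not the paper's own analysis; it is a minimal Python illustration under stated assumptions, using synthetic survey-style scores and hypothetical variable names (`causability`, `explainability`, `trust`) to show how an indirect (mediated) effect of this kind is typically estimated.

```python
# Minimal sketch of a bootstrapped mediation test. Data are synthetic
# and variable names are hypothetical; this only illustrates the
# hypothesized causability -> explainability -> trust chain.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 300

# Synthetic scores: causability partly drives explainability, which
# partly drives trust (plus noise), mirroring the hypothesized chain.
causability = rng.normal(4.0, 1.0, n)
explainability = 0.6 * causability + rng.normal(0.0, 1.0, n)
trust = 0.5 * explainability + 0.2 * causability + rng.normal(0.0, 1.0, n)

def indirect_effect(x, m, y):
    """a*b estimate: x -> m (path a), then m -> y controlling for x (path b)."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
    return a * b

# Percentile bootstrap of the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(causability[idx], explainability[idx], trust[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"indirect effect = {indirect_effect(causability, explainability, trust):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```

A bootstrap confidence interval that excludes zero would indicate that explainability mediates the effect of causability on trust in this toy setup; a full study of this design would instead fit all paths simultaneously in a structural model.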
