Assessing Demand for Transparency in Intelligent Systems Using Machine Learning

Intelligent systems offering decision support can reduce cognitive load and improve the efficiency of decision making in a variety of contexts. These systems assist users by evaluating multiple courses of action and recommending the right action at the right time. Modern intelligent systems using machine learning introduce new decision-support capabilities, but these capabilities can come at a cost. Machine learning models often provide little explanation of their outputs or reasoning processes, making it difficult to determine when it is appropriate to trust them and, when it is not, what went wrong. To improve trust and ensure appropriate reliance on these systems, users must be afforded greater transparency: an understanding of the system's reasoning and an explanation of its predictions or classifications. Here we discuss the salient factors in designing transparent intelligent systems that use machine learning, and we present the results of a user-centered design study. We propose design guidelines derived from our study and discuss next steps for designing for intelligent system transparency.
