The Effect of Displaying System Confidence Information on the Usage of Autonomous Systems for Non-specialist Applications: A Lab Study

Autonomous systems are designed to act on behalf of users, operating autonomously on data from sensors or online sources. As such, designing interaction mechanisms that enable users to understand the operation of autonomous systems and to flexibly delegate or regain control remains an open challenge for HCI. Against this background, we report on a lab study designed to investigate whether displaying an autonomous system's confidence in the quality of its own work, which we call its confidence information, can improve user acceptance of and interaction with such systems. The results demonstrate that confidence information encourages use of the autonomous system we tested, compared to a condition in which such information is unavailable. A further contribution of our work is the method we employ to study users' incentives to work in collaboration with the autonomous system. In experiments comparing different incentive strategies, our results indicate that our translation of behavioural economics research methods to HCI can support the study of interactions with autonomous systems in the lab.
