Adaptive Trust Calibration for Supervised Autonomous Vehicles

Poor trust calibration in autonomous vehicles often degrades overall system performance in terms of safety or efficiency. Existing studies have primarily examined how system transparency helps maintain proper trust calibration, with little emphasis on how to detect over-trust and under-trust, or how to recover from them. To address these research gaps, we first provide a framework for detecting the calibration status on the basis of the user's reliance behavior. We then propose a new concept of cognitive cues, called trust calibration cues (TCCs), that prompt the user to quickly restore appropriate trust calibration. Combining this framework with TCCs, we explore a novel method of adaptive trust calibration. We evaluate the framework and examine the effectiveness of TCCs using a newly developed online drone simulator.
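The detection idea in the abstract can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's actual model): it assumes calibration status is classified by comparing a single reliance decision against whether the automation was actually capable in that situation, and that a TCC is issued only when trust is miscalibrated. All names (`Observation`, `calibration_status`, `issue_tcc`) are invented for this sketch.

```python
from dataclasses import dataclass
from typing import Optional

OVER_TRUST = "over-trust"
UNDER_TRUST = "under-trust"
CALIBRATED = "calibrated"

@dataclass
class Observation:
    relied_on_automation: bool  # did the user delegate this situation to the system?
    automation_capable: bool    # could the system actually handle the situation?

def calibration_status(obs: Observation) -> str:
    """Classify one reliance decision (hypothetical rule, not the paper's exact method)."""
    if obs.relied_on_automation and not obs.automation_capable:
        return OVER_TRUST   # relying on automation that cannot cope
    if not obs.relied_on_automation and obs.automation_capable:
        return UNDER_TRUST  # intervening although the system would have succeeded
    return CALIBRATED

def issue_tcc(status: str) -> Optional[str]:
    """Trigger a trust calibration cue (TCC) only when trust is miscalibrated."""
    if status == CALIBRATED:
        return None
    return f"TCC: {status} detected"
```

In practice such a detector would aggregate many reliance decisions over time rather than classify a single one; the sketch only shows the core mapping from reliance behavior to calibration status and cue.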
