Trust triggers for multimodal command and control interfaces

For autonomous systems to be accepted by society and by their operators, they must instil an appropriate level of trust. In this paper, we discuss the dimensions that constitute trust and examine specific triggers of trust for an autonomous underwater vehicle (AUV), comparing a multimodal command and control interface with a language-only reporting system. We conclude that there is a relationship between perceived trust and the clarity of the user's mental model, and that this mental model is clearer in the multimodal condition than in the language-only condition. Regarding trust triggers, we show that a number of triggers, such as anomalous sensor readings, noticeably modify subjects' perceived trust, but in an appropriate manner, thus illustrating the utility of the interface.
