A Markovian Method for Predicting Trust Behavior in Human-Agent Interaction

Trust calibration is critical to the success of human-agent interaction (HAI). However, individual differences are ubiquitous in people's trust relationships with autonomous systems. To help its heterogeneous human teammates calibrate their trust in it, an agent must first model them dynamically as individuals, rather than communicating with them all in the same manner. It can then generate expectations of its teammates' behavior and optimize its own communication based on the current state of its trust relationship with them. In this work, we examine how an agent can generate accurate expectations given observations of only the teammate's trust-related behaviors (e.g., did the person follow or ignore its advice?). In addition to this limited input, we also seek a specific output: accurately predicting the human teammate's future trust behavior (e.g., will the person follow or ignore my next suggestion?). In this investigation, we construct a model capable of generating such expectations, using data gathered in a human-subject study of behavior in a simulated human-robot interaction (HRI) scenario. We first analyze the ability of measures from a pre-survey on trust-related traits to predict subsequent trust behaviors; however, as the interaction progresses, this effect is dwarfed by direct experience. We therefore analyze the ability of sequences of the teammate's prior behavior to predict subsequent trust behaviors. Such behavioral sequences have been shown to be indicative of the subjective beliefs of other teammates, and we show here that they have predictive power as well.
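
As a minimal illustrative sketch (not the model from the study), one could treat the teammate's follow/ignore decisions as a binary sequence and fit a first-order Markov chain over it, predicting the next trust behavior from the most recent one; the binary encoding, the add-one smoothing, and the model order below are assumptions made purely for illustration.

from collections import Counter

def fit_transition_probs(sequence):
    """Estimate P(next behavior | current behavior) from a 0/1 sequence,
    where 1 = followed the agent's advice and 0 = ignored it (assumed encoding)."""
    counts = Counter(zip(sequence, sequence[1:]))
    probs = {}
    for current in (0, 1):
        # Add-one (Laplace) smoothing keeps unseen transitions at nonzero probability.
        total = counts[(current, 0)] + counts[(current, 1)] + 2
        probs[current] = {nxt: (counts[(current, nxt)] + 1) / total for nxt in (0, 1)}
    return probs

def predict_next(sequence, probs):
    """Predict the most probable next behavior given the last observed one."""
    last = sequence[-1]
    return max(probs[last], key=probs[last].get)

# Hypothetical example: a teammate who mostly follows advice but occasionally ignores it.
observed = [1, 1, 0, 1, 1, 1, 0, 1, 1]
transition_probs = fit_transition_probs(observed)
print(predict_next(observed, transition_probs))

A higher-order variant would condition on the last k behaviors rather than only the most recent one, which corresponds more directly to the behavioral sequences discussed above.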
