Clustering Behavior to Recognize Subjective Beliefs in Human-Agent Teams

Trust is critical to the success of human-agent teams, and transparency is a key antecedent of trust. To interact effectively with human teammates, an agent must explain itself so that they understand its decision-making process. However, individual differences among human teammates require the agent to dynamically adjust its explanation strategy based on their unobservable subjective beliefs. The agent must therefore recognize its teammates' subjective beliefs relevant to trust-building (e.g., their understanding of the agent's capabilities and decision process). We leverage a nonparametric method that enables an agent to use its history of prior interactions to recognize and predict a new teammate's subjective beliefs. We first gather data combining observable behavior sequences with survey-based measures of typically unobservable perceptions. We then use a nearest-neighbor approach to identify the prior teammates most similar to the new one and, as in collaborative filtering, use those neighbors' survey responses to infer the likelihood of the new teammate's possible beliefs. The results provide insight into which types of beliefs are easy (and which are hard) to infer from purely behavioral observations.
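The abstract describes a neighborhood-based approach: find the prior teammates whose observed behavior is most similar to the new teammate's, then let their recorded survey responses vote on the new teammate's likely belief. The sketch below illustrates that idea in Python under assumptions of our own; the feature encoding, distance measure, function names, and toy data are hypothetical placeholders, not the authors' actual pipeline.

```python
# Minimal sketch of nearest-neighbor belief inference over behavior features.
# All names, feature encodings, and data are hypothetical, for illustration only.
import numpy as np

def infer_belief_distribution(new_behavior, prior_behaviors, prior_beliefs, k=5):
    """Estimate a distribution over a new teammate's unobserved survey response.

    new_behavior    : 1-D feature vector summarizing the new teammate's observed behavior
    prior_behaviors : (n_teammates, n_features) array of prior teammates' behavior features
    prior_beliefs   : length-n list of prior teammates' survey responses (e.g., Likert 1-5)
    k               : number of nearest neighbors to consult
    """
    # Distance in behavior-feature space picks out the most similar prior teammates.
    dists = np.linalg.norm(prior_behaviors - new_behavior, axis=1)
    neighbors = np.argsort(dists)[:k]

    # As in neighborhood-based collaborative filtering, the neighbors' recorded
    # survey responses vote on the likely belief of the new teammate.
    votes = [prior_beliefs[i] for i in neighbors]
    values, counts = np.unique(votes, return_counts=True)
    return dict(zip(values.tolist(), (counts / counts.sum()).tolist()))

# Toy usage with made-up behavior features and Likert-scale survey answers.
prior_behaviors = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]])
prior_beliefs = [5, 4, 2, 1]  # e.g., "I understand the agent's decision process"
print(infer_belief_distribution(np.array([0.85, 0.15]), prior_behaviors, prior_beliefs, k=2))
```

In this toy case the two nearest prior teammates reported high understanding, so the inferred distribution concentrates on the high end of the scale; a real system would instead derive the behavior features from the logged interaction sequences described in the abstract.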
