Communication in Human-Agent Teams for Tasks with Joint Action

In many scenarios, humans must team with artificial agents to achieve joint aims. When a team of human and artificial agents works together, communication is essential for establishing shared situation awareness of the task at hand. With no human in the loop and little cost for communication, task information can be exchanged freely. However, when communication becomes expensive, or when humans are in the loop, the information-sharing strategy must be carefully designed: too little information undermines shared situation awareness, while too much overloads the human team members, decreasing overall performance. This paper investigates the effects of sharing beliefs and goals in agent teams and in human-agent teams. We performed a set of experiments using the Blocks World for Teams (BW4T) testbed to assess different strategies for information sharing. In previous experimental studies using BW4T, explanations of agent behaviour were shown to have no effect on team performance. One possible reason is that the existing BW4T scenarios contained joint tasks but not joint actions; that is, atomic actions requiring interdependent and simultaneous action by more than one agent. We implemented new BW4T scenarios in which some actions require two agents to complete. Our results showed an improvement in artificial-agent team performance when communicating goals and sharing beliefs, with goals contributing more to team performance, and that in human-agent teams, communicating only goals was more effective than communicating both goals and beliefs.
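The distinction between a joint task and a joint action can be made concrete with a toy sketch. The following Python model is purely illustrative (not from the paper, and not the BW4T or GOAL implementation): a "joint action" is modelled as an atomic step that only succeeds when two agents have communicated a commitment to it simultaneously.

```python
# Illustrative toy model of a joint action: a heavy block can only be
# picked up when at least two agents have announced the same goal for it.
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Agent:
    name: str
    announced_goal: Optional[str] = None  # goal communicated to teammates

def attempt_joint_action(block: str, agents: List[Agent]) -> bool:
    """Succeeds only if two or more agents are simultaneously
    committed to picking up the same block."""
    committed = [a for a in agents if a.announced_goal == f"pickup:{block}"]
    return len(committed) >= 2

team = [Agent("alice"), Agent("bob")]

# Without goal communication, no agent knows who else intends to help,
# so the interdependent action cannot be synchronised:
print(attempt_joint_action("red1", team))   # False

# Once goals are communicated, the agents can commit together:
for a in team:
    a.announced_goal = "pickup:red1"
print(attempt_joint_action("red1", team))   # True
```

In this sketch, a joint *task* (deliver all blocks) can still be decomposed into independent single-agent steps, whereas the joint *action* above fails unless the agents' communicated goals coincide at the same moment, which is what makes goal communication consequential.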
