Pluggable Social Artificial Intelligence for Enabling Human-Agent Teaming

As intelligent systems become increasingly capable of performing their tasks without continuous human input, direction, or supervision, new human-machine interaction concepts are needed. A promising approach is human-agent teaming, in which humans and machines act as equal team partners. This paper presents an overview of the current state of the art in human-agent teaming, including an analysis of human-agent teams along five dimensions; a framework describing the key teaming functionalities; a technical architecture, called SAIL, that supports social human-agent teaming through a modular implementation of these functionalities; a technical implementation of the architecture; and a proof-of-concept prototype built with the framework and architecture. We conclude with a reflection on the current state of the field and an outlook on the way forward.
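The abstract describes SAIL only at a high level, and the paper's actual interfaces are not reproduced here. As an illustration of what a "pluggable" set of teaming functionalities could look like in code, the sketch below shows a minimal plug-in registry in Python; all names (TeamingModule, AgentCore, SituationAwarenessModule) are hypothetical and are not taken from the SAIL architecture.

```python
# Hypothetical sketch of a plug-in registry for teaming functionalities.
# The class and method names are illustrative only; they do not reflect
# the actual SAIL API, merely the idea of modules added or swapped at runtime.
from abc import ABC, abstractmethod
from typing import Dict


class TeamingModule(ABC):
    """One human-agent teaming functionality (e.g. situation awareness,
    work agreements, explanation) behind a uniform interface."""

    name: str = "unnamed"

    @abstractmethod
    def update(self, world_state: Dict) -> Dict:
        """Consume the shared world state and return this module's contribution."""


class SituationAwarenessModule(TeamingModule):
    name = "situation_awareness"

    def update(self, world_state: Dict) -> Dict:
        # Toy behaviour: report which teammates are currently active.
        agents = world_state.get("agents", {})
        return {"active_agents": [a for a, s in agents.items() if s == "active"]}


class AgentCore:
    """Hypothetical agent core that hosts pluggable teaming modules."""

    def __init__(self) -> None:
        self._modules: Dict[str, TeamingModule] = {}

    def register(self, module: TeamingModule) -> None:
        self._modules[module.name] = module

    def step(self, world_state: Dict) -> Dict:
        # Each registered module contributes to the agent's social behaviour.
        return {name: m.update(world_state) for name, m in self._modules.items()}


if __name__ == "__main__":
    core = AgentCore()
    core.register(SituationAwarenessModule())
    state = {"agents": {"human_1": "active", "robot_1": "idle"}}
    print(core.step(state))  # {'situation_awareness': {'active_agents': ['human_1']}}
```

In this kind of design, adding a new social capability amounts to registering another module implementing the common interface, which is one plausible reading of what "pluggable" means in the paper's title.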
