Learned human-agent decision-making, communication and joint action in a virtual reality environment

Humans make decisions and act alongside other humans to pursue both short-term and long-term goals. As a result of ongoing progress in areas such as computing science and automation, humans now also interact with non-human agents of varying complexity as part of their day-to-day activities, and substantial work is being done to integrate increasingly intelligent machine agents into human work and play. With increases in the cognitive, sensory, and motor capacities of these agents, intelligent machinery for human assistance can now reasonably be considered to engage in joint action with humans: that is, two or more agents adapting their behaviour and their understanding of each other so as to make progress on shared goals. The mechanisms, conditions, and opportunities for skillful joint action in human-machine partnerships are of great interest to multiple communities. Despite this interest, human-machine joint action remains under-explored, especially in cases where a human and an intelligent machine interact persistently over the course of real-time, daily-life experience. In this work, we contribute a virtual reality environment wherein a human and an agent can adapt their predictions, their actions, and their communication so as to pursue a simple foraging task. In a case study with a single participant, we provide an example of human-agent coordination and decision-making that involves prediction learning on the part of both the human and the machine agent, and control learning on the part of the machine agent, wherein audio communication signals are used to cue its human partner in service of acquiring shared reward. The results of this case study suggest the utility of studying human-machine coordination in a virtual reality environment, and identify further research that will expand our understanding of persistent human-machine joint action.
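To make the abstract's pairing of prediction learning and control learning concrete, the Python sketch below shows one minimal way such an agent could be structured. This is an illustration under stated assumptions, not the paper's implementation: it assumes standard temporal-difference methods with linear function approximation in the style of Sutton and Barto's Reinforcement Learning: An Introduction, and all class, function, and parameter names (TDPredictor, CueingAgent, num_features, and so on) are hypothetical.

```python
# Minimal sketch, assuming TD(lambda) prediction and one-step Q-learning
# with linear function approximation; names and defaults are illustrative.
import numpy as np

class TDPredictor:
    """On-policy TD(lambda) learner for a general value function:
    a prediction about a future signal (e.g., upcoming reward) made
    from a feature vector describing the current observation."""

    def __init__(self, num_features, alpha=0.1, gamma=0.97, lam=0.9):
        self.w = np.zeros(num_features)   # learned weights
        self.e = np.zeros(num_features)   # eligibility trace
        self.alpha, self.gamma, self.lam = alpha, gamma, lam

    def predict(self, x):
        return float(np.dot(self.w, x))

    def update(self, x, signal, x_next):
        # TD error: observed signal plus discounted next prediction,
        # minus the current prediction.
        delta = signal + self.gamma * self.predict(x_next) - self.predict(x)
        self.e = self.gamma * self.lam * self.e + x
        self.w += self.alpha * delta * self.e


class CueingAgent:
    """Epsilon-greedy control learner that selects an audio cue
    (one action per cue, including 'stay silent') and learns action
    values from the shared reward the human-agent team receives."""

    def __init__(self, num_features, num_cues,
                 alpha=0.1, gamma=0.97, epsilon=0.1):
        self.q = np.zeros((num_cues, num_features))  # one weight vector per cue
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.num_cues = num_cues

    def choose_cue(self, x):
        # Explore with probability epsilon, otherwise pick the
        # cue with the highest estimated action value.
        if np.random.random() < self.epsilon:
            return np.random.randint(self.num_cues)
        return int(np.argmax(self.q @ x))

    def update(self, x, cue, reward, x_next):
        # One-step Q-learning update over linear function approximation.
        target = reward + self.gamma * np.max(self.q @ x_next)
        delta = target - np.dot(self.q[cue], x)
        self.q[cue] += self.alpha * delta * x
```

In a foraging loop of this hypothetical kind, the agent would call choose_cue on each step, play the selected sound (or remain silent), and then update both learners from the next observation and the shared reward; the human partner's own prediction learning happens outside any code, as they adapt to the meaning of the cues over repeated interaction.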
