Joint action understanding improves robot-to-human object handover

The development of trustworthy human-assistive robots is a challenge that goes beyond the traditional boundaries of engineering. Essential components of trustworthiness are safety, predictability and usefulness. In this paper we demonstrate that integrating joint action understanding from human-human interaction into the human-robot context can significantly improve the success rate of robot-to-human object handover tasks. We take a two-layer approach. The first layer handles the physical aspects of the handover: the robot's decision to release the object is informed by a Hidden Markov Model that estimates the state of the handover. Inspired by observations of human-human handovers, we then introduce a higher-level cognitive layer that models behaviour characteristic of a human user in a handover situation. In particular, we focus on incorporating eye gaze and head orientation into the robot's decision making. Our results show that integrating these non-verbal cues yields a significantly higher handover success rate, resulting in a more robust and therefore safer system. A sketch of the two-layer release decision follows.
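
To make the two-layer decision concrete, the sketch below filters discretised force observations with an HMM forward pass (the physical layer) and releases the object only when the estimated "pulling" state is likely and the user's gaze is directed at the handover (the cognitive layer). All state names, transition and emission probabilities, the observation encoding, and the release threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch of the two-layer release decision described above.
# State names, probabilities, and the gaze gating are assumptions for
# demonstration, not parameters taken from the paper.

STATES = ["approach", "contact", "pulling", "released"]

# Transition matrix: A[i, j] = P(next state j | current state i).
A = np.array([
    [0.90, 0.10, 0.00, 0.00],
    [0.00, 0.80, 0.20, 0.00],
    [0.00, 0.05, 0.75, 0.20],
    [0.00, 0.00, 0.00, 1.00],
])

# Emission matrix: B[i, k] = P(observation k | state i), over three
# discretised force readings: 0 = no load, 1 = partial transfer, 2 = full pull.
B = np.array([
    [0.90, 0.08, 0.02],
    [0.30, 0.60, 0.10],
    [0.05, 0.35, 0.60],
    [0.80, 0.15, 0.05],
])

def forward_step(belief, obs):
    """One HMM forward-filter step: predict via A, reweight by the evidence."""
    updated = (belief @ A) * B[:, obs]
    return updated / updated.sum()

def should_release(belief, user_is_looking, threshold=0.7):
    """Physical layer: release only when 'pulling' is the likely state.
    Cognitive layer: additionally require the user's gaze / head
    orientation to be directed at the handover."""
    return belief[STATES.index("pulling")] > threshold and user_is_looking

# Example run over a fabricated observation/gaze sequence.
belief = np.array([1.0, 0.0, 0.0, 0.0])  # start fully in 'approach'
for obs, gaze in [(0, False), (1, True), (1, True), (2, True), (2, True)]:
    belief = forward_step(belief, obs)
    if should_release(belief, gaze):
        print("release object; belief:", np.round(belief, 2))
        break
```

Gating the release on both the force-based belief and the gaze cue mirrors the paper's finding that adding the non-verbal channel makes the decision more robust: a high pulling belief alone (e.g., from an accidental tug) is not sufficient to trigger release.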
