Anticipatory robot control for efficient human-robot collaboration

Efficient collaboration requires collaborators to monitor the behaviors of their partners, make inferences about their task intent, and plan their own actions accordingly. To work seamlessly and efficiently with their human counterparts, robots must similarly rely on predictions of their users' intent when planning their actions. In this paper, we present an anticipatory control method that enables robots to proactively perform task actions based on the anticipated actions of their human partners. We implemented this method in a robot system that monitored its user's gaze, predicted the user's task intent from observed gaze patterns, and performed anticipatory task actions according to its predictions. Results from a human-robot interaction experiment showed that anticipatory control enabled the robot to respond to user requests and complete the task faster, by 2.5 seconds on average and up to 3.4 seconds, than a robot using a reactive control method that did not anticipate user intent. Our findings highlight the promise of anticipatory action for achieving efficient human-robot teamwork.
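
To make the monitor-predict-act pipeline concrete, below is a minimal Python sketch of one plausible anticipatory control loop: gaze fixations are accumulated over a short sliding window, the object that dominates the user's gaze is taken as the predicted intent, and the robot begins acting on it before an explicit request arrives. This is an illustrative assumption, not the paper's actual implementation; the gaze reader, arm interface, threshold, and window length are all hypothetical stand-ins.

```python
# Sketch of an anticipatory control loop (hypothetical API, not the
# authors' system). Intent is estimated from gaze-fixation counts in a
# sliding window, a proxy for dwell time under uniform sampling.

from collections import defaultdict
import time

CONFIDENCE_THRESHOLD = 0.75   # assumed value; would be tuned per task
WINDOW_SECONDS = 2.0          # assumed sliding window of gaze samples

def predict_intent(gaze_samples, candidates):
    """Return (best_candidate, confidence) from recent gaze fixations.

    gaze_samples: list of (timestamp, object_id) fixation records.
    candidates:   set of object ids the user might request next.
    """
    now = time.time()
    dwell = defaultdict(float)
    for t, obj in gaze_samples:
        if now - t <= WINDOW_SECONDS and obj in candidates:
            dwell[obj] += 1.0
    total = sum(dwell.values())
    if total == 0:
        return None, 0.0
    best = max(dwell, key=dwell.get)
    return best, dwell[best] / total

def control_loop(gaze_tracker, arm, candidates):
    """Anticipatory variant: start reaching toward the predicted object
    before the user issues a request. gaze_tracker and arm are
    hypothetical interfaces injected by the caller."""
    samples = []
    while candidates:
        samples.append(gaze_tracker.read())      # (timestamp, object_id)
        target, conf = predict_intent(samples, candidates)
        if target is not None and conf >= CONFIDENCE_THRESHOLD:
            arm.reach_and_deliver(target)        # proactive task action
            candidates.discard(target)
            samples.clear()                      # restart for next item
```

A reactive baseline would differ only in the trigger: it would call the delivery action after an explicit user request rather than on a gaze-based prediction, which is the contrast the reported 2.5-second average speedup measures.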
