Predicting User Intent Through Eye Gaze for Shared Autonomy

Shared autonomy combines user control of a robot with intelligent autonomous behavior to help people complete tasks more quickly and with less effort. Current shared autonomy frameworks rely primarily on direct user input, for example through a joystick, to control the robot's actions. However, indirect input such as eye gaze can be a valuable source of information about user intentions and future actions: when people perform manipulation tasks, their gaze settles on the objects of interest before the corresponding movements begin. A shared autonomy system can exploit this implicit information to improve its goal prediction and, in turn, its overall assistive capability. In this paper, we describe how eye-gaze behavior can be incorporated into shared autonomy. Building on previous work that represents user goals as latent states in a partially observable Markov decision process (POMDP), we describe how gaze behavior can serve as observations that update the POMDP's probability distribution over goal states, and we solve for the optimal action using hindsight optimization. We also detail a pilot implementation that uses a head-mounted eye tracker to collect eye-gaze data.
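To make the inference step concrete, below is a minimal sketch of the belief update and action selection this framework implies. It assumes a discrete set of candidate goals and a hypothetical Gaussian observation model in which fixations cluster around the intended goal; the function names, the `sigma` parameter, and the value table `goal_values` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def update_goal_belief(belief, gaze_point, goal_positions, sigma=0.05):
    """Bayesian update of the belief over goals from one gaze fixation.

    Hypothetical observation model: fixations fall around the intended
    goal with isotropic Gaussian noise (standard deviation `sigma`).
    """
    # Likelihood of the observed fixation under each candidate goal.
    dists = np.linalg.norm(goal_positions - gaze_point, axis=1)
    likelihood = np.exp(-0.5 * (dists / sigma) ** 2)
    # Bayes' rule: posterior is proportional to likelihood times prior.
    posterior = likelihood * belief
    return posterior / posterior.sum()

def hindsight_action(belief, goal_values, actions):
    """Hindsight-optimization-style action selection.

    Each goal's value is computed as if that goal were known with
    certainty (the hindsight determinization); the chosen action
    maximizes the expectation of these values under the belief.
    """
    # goal_values[a][g]: value of action a assuming goal g is the target.
    expected = {a: float(np.dot(goal_values[a], belief)) for a in actions}
    return max(expected, key=expected.get)

# Example: gaze near the second of three goals shifts the belief toward it.
goals = np.array([[0.3, 0.0], [0.0, 0.4], [-0.3, 0.1]])
belief = np.full(3, 1.0 / 3.0)  # uniform prior over goals
belief = update_goal_belief(belief, np.array([0.02, 0.38]), goals)
print(belief)  # mass concentrates on the goal nearest the fixation
```

In this sketch, each fixation sharpens the distribution over latent goal states, and hindsight optimization sidesteps full POMDP planning by evaluating each action against every determinized goal and averaging under the current belief.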
