Intention Inference for Human-Robot Collaboration in Assistive Robotics

In this chapter, we present an algorithm that infers the intent of a human operator's arm movements from observations made by a Microsoft Kinect sensor. Intentions are modeled as the goal locations of reaching motions in three-dimensional space. Inferring human intention is a critical step toward realizing safe human-robot collaboration. We model the human arm's nonlinear motion dynamics as an unknown nonlinear function in which intentions appear as parameters, and we learn this unknown model with a neural network. Based on the learned model, an approximate expectation-maximization (EM) algorithm is developed to infer human intentions. Furthermore, an identifier-based online model-learning algorithm is developed to adapt to variations in arm motion dynamics, motion trajectories, goal locations, and initial conditions across different human subjects. We present experimental results for the proposed algorithm on Kinect data collected from different users performing a variety of reaching motions, and we also evaluate its performance on Cornell's CAD-120 dataset.
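To make the inference idea concrete, the sketch below computes a posterior over a discrete set of candidate goal locations from an observed reaching trajectory. It is a simplified stand-in, not the chapter's algorithm: in place of the learned neural-network dynamics it assumes a hypothetical goal-directed velocity model (the hand moves toward the goal at roughly constant speed with Gaussian noise), and the function name, `speed`, and `noise_std` parameters are illustrative assumptions.

```python
import numpy as np

def infer_goal_posterior(positions, goals, speed=0.1, noise_std=0.05):
    """Posterior over candidate goal locations given a reaching trajectory.

    Assumed (hypothetical) dynamics: each displacement points toward the
    goal, x_{t+1} - x_t ~ N(speed * (g - x_t)/||g - x_t||, noise_std^2 I).
    In the chapter this role is played by the learned neural-network model.
    """
    positions = np.asarray(positions, dtype=float)   # (T, 3) observed hand positions
    goals = np.asarray(goals, dtype=float)           # (K, 3) candidate goal locations
    disp = np.diff(positions, axis=0)                # observed displacements, (T-1, 3)

    log_post = np.zeros(len(goals))
    for k, g in enumerate(goals):
        to_goal = g - positions[:-1]                 # direction to goal at each step
        to_goal /= np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
        resid = disp - speed * to_goal               # prediction error under goal k
        log_post[k] = -0.5 * np.sum(resid ** 2) / noise_std ** 2

    log_post -= log_post.max()                       # stabilize before exponentiating
    post = np.exp(log_post)
    return post / post.sum()                         # normalized posterior over goals
```

Running this on a trajectory moving straight toward one candidate concentrates the posterior on that goal after only a few observations; the chapter's approximate EM algorithm plays an analogous role, but with the goal entering as a parameter of the learned nonlinear dynamics rather than a fixed velocity rule.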
