Automatic selection of task spaces for imitation learning

Previous work [1] shows that representing movements in task spaces offers many advantages for learning object-related and goal-directed movement tasks through imitation. It reduces the dimensionality of the data to be learned and simplifies the correspondence problem that arises from the differing kinematic structures of teacher and robot. Further, the task space representation provides a first level of generalization, for example with respect to differing absolute positions when bi-manual movements are represented relative to each other. Although task spaces are widely used, even if not mentioned explicitly, they are mostly defined a priori. This work is a step towards an automatic selection of task spaces. Observed movements are mapped into a pool of possibly conflicting task spaces, and we present methods that analyze this pool in order to acquire the task space descriptors that best match the observation. Because statistical measures cannot explain importance for all kinds of movements, the presented selection scheme incorporates additional criteria, such as an attention-based measure. Further, we introduce methods that make a significant step from purely statistically-driven task space selection towards model-based movement analysis using a simulation of a complex human model: the effort and discomfort of the human teacher are analyzed and used as hints for important task elements. All methods are validated with real-world data gathered using color tracking with a stereo vision system and a VICON motion capture system.
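To make the statistical side of such a selection scheme concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation): demonstrations are mapped into several candidate task spaces, and each space is scored by the inter-demonstration variance of the mapped trajectories. Low variance across demonstrations suggests the space captures an invariant, and hence likely important, aspect of the task. The function name, data layout, and scoring formula are illustrative assumptions.

```python
import numpy as np

def task_space_relevance(trajectories):
    """Score candidate task spaces by inter-demonstration variance.

    trajectories: dict mapping task-space name -> array of shape
                  (n_demonstrations, n_timesteps, dim), i.e. the same
                  observed movement mapped into each candidate space.
    returns: dict mapping task-space name -> scalar relevance score;
             higher means more consistent across demonstrations.
    """
    scores = {}
    for name, data in trajectories.items():
        data = np.asarray(data, dtype=float)
        # Variance across demonstrations, averaged over time and dimensions.
        var = data.var(axis=0).mean()
        # Illustrative monotone mapping: low variance -> high relevance.
        scores[name] = 1.0 / (1.0 + var)
    return scores

# Toy example with two hypothetical task spaces: the relative
# hand-to-hand position is nearly identical across demonstrations,
# while the absolute hand position shifts between trials.
rng = np.random.default_rng(0)
relative = rng.normal(0.0, 0.01, size=(5, 50, 3))           # consistent
absolute = relative + rng.normal(0.0, 0.5, size=(5, 1, 3))  # per-trial offset
scores = task_space_relevance({"relative": relative, "absolute": absolute})
```

In this toy setting the relative representation scores higher than the absolute one, mirroring the generalization argument above. A variance-only criterion, however, is exactly what the attention-based and model-based measures in this work are meant to complement.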

[1] A. Wing et al., "Motor control: Mechanisms of motor equivalence in handwriting," Current Biology, 2000.

[2] K. Rohlfing et al., "Parental action modification highlighting the goal versus the means," 7th IEEE International Conference on Development and Learning, 2008.

[3] A. Billard et al., "Active Teaching in Robot Programming by Demonstration," RO-MAN 2007, 16th IEEE International Symposium on Robot and Human Interactive Communication.

[4] H. J. Ritter et al., "Gestalt-based action segmentation for robot task learning," Humanoids 2008, 8th IEEE-RAS International Conference on Humanoid Robots.

[5] C. Koch et al., "A Model of Saliency-Based Visual Attention for Rapid Scene Analysis," 2009.

[6] K. Abdel-Malek et al., "Optimization-based trajectory planning of the human upper body," Robotica, 2006.

[7] J. J. Steil et al., "Task-level imitation learning using variance-based movement optimization," IEEE International Conference on Robotics and Automation, 2009.

[8] A. Billard et al., "A framework integrating statistical and social cues to teach a humanoid robot new skills," ICRA 2008.

[9] M. Gienger et al., "Task-oriented whole body motion for humanoid robots," 5th IEEE-RAS International Conference on Humanoid Robots, 2005.

[10] N. Iwahashi et al., "Motion recognition and generation by combining reference-point-dependent probabilistic models," IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008.

[11] J. Yang et al., "Towards a new generation of virtual humans," 2006.

[12] Y. Nagai et al., "Does Disturbance Discourage People from Communicating with a Robot?," RO-MAN 2007, 16th IEEE International Symposium on Robot and Human Interactive Communication.

[13] M. A. Arbib et al., "Mirror neurons and imitation: A computationally guided review," Neural Networks, 2006.

[14] A. Billard et al., "Discriminative and adaptive imitation in uni-manual and bi-manual tasks," Robotics and Autonomous Systems, 2006.

[15] H. Hu et al., "Robot imitation: Body schema and body percept," 2005.

[16] A. Billard et al., "Goal-Directed Imitation in a Humanoid Robot," IEEE International Conference on Robotics and Automation, 2005.

[17] M. Toussaint et al., "Optimization of sequential attractor-based movement for compact behaviour generation," 7th IEEE-RAS International Conference on Humanoid Robots, 2007.

[18] M. Matarić et al., "Fixation behavior in observation and imitation of human movement," Brain Research: Cognitive Brain Research, 1998.

[19] K. J. Rohlfing et al., "Toward designing a robot that learns actions from parental demonstrations," IEEE International Conference on Robotics and Automation, 2008.

[20] T. Asfour et al., "Imitation Learning of Dual-Arm Manipulation Tasks in Humanoid Robots," 6th IEEE-RAS International Conference on Humanoid Robots, 2006.

[21] S. Calinon et al., "Continuous extraction of task constraints in a robot programming by demonstration framework," 2007.

[22] J. Nakanishi et al., "Towards compliant humanoids: an experimental assessment of suitable task space position/orientation controllers," IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007.

[23] M. Gienger et al., "Exploiting Task Intervals for Whole Body Robot Control," IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.

[24] J. Jolles et al., "Selective reaching: evidence for multiple frames of reference," Journal of Experimental Psychology: Human Perception and Performance, 2002.