On using guided motor primitives to execute Continuous Goal-Directed Actions

In this paper, we study how human-robot interaction can benefit the Continuous Goal-Directed Actions (CGDA) framework. Specifically, we have developed a system for robot discovery of motor primitives from random human-guided movements. These guided motor primitives (GMP) are used as scaffolds to reproduce goal-directed actions. CGDA encodes goals as the changes that actions produce in object features (e.g., color, area). This paper focuses on using motor primitives extracted from human-guided random robot movements to execute these goal-directed actions. The human guides the robot joints through random movements, which are later divided into small segments. These segments are compared in terms of joint positions and selected for diversity. To perform a goal-directed action, the robot must discover an adequate sequence of GMP. To discover these sequences, we organize the primitives as a tree of incremental depth (where each node represents a primitive) and use breadth-first search. In one of the experiments performed, the robot executes a task based on spatial object features; in the other, the goal is to paint a wall by following a color feature trajectory.
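The sequence-discovery step described above can be sketched as a breadth-first search over a tree whose nodes are primitives, so that shallower (shorter) sequences are tried before deeper ones. The sketch below is a minimal illustration, not the paper's implementation: the `goal_reached` predicate is a hypothetical stand-in for the CGDA comparison between the predicted object-feature trajectory and the goal trajectory, and primitives are treated as opaque labels.

```python
from collections import deque

def bfs_primitive_sequence(primitives, goal_reached, max_depth):
    """Breadth-first search for a sequence of guided motor primitives (GMP).

    primitives   -- list of candidate GMPs (opaque labels here)
    goal_reached -- predicate on a candidate sequence (hypothetical; in CGDA
                    this would score the resulting object-feature trajectory
                    against the goal trajectory)
    max_depth    -- maximum sequence length (tree depth) to explore
    """
    queue = deque([[]])          # root of the tree: the empty sequence
    while queue:
        seq = queue.popleft()    # shallower sequences are expanded first
        if seq and goal_reached(seq):
            return seq           # shortest adequate sequence found
        if len(seq) < max_depth:
            for p in primitives: # expand this node one tree level deeper
                queue.append(seq + [p])
    return None                  # no sequence within max_depth reaches the goal
```

As a toy usage example, with primitives that displace a feature by 1, -1, or 2 units and a goal of a net displacement of 3, the search returns the shortest sequence `[1, 2]` before any longer alternative.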
