Learn to wipe: A case study of structural bootstrapping from sensorimotor experience

In this paper, we address the question of generative knowledge construction from sensorimotor experience acquired through exploration. We show how actions and their effects on objects, together with perceptual representations of those objects, are used to build generative models that can then be run in internal simulation to predict the outcomes of actions. Specifically, the paper presents an experiential cycle for the wiping task: associations between object properties (softness and height) and action parameters are learned, and generative models are built from the sensorimotor experience gathered in wiping experiments. Object and action are linked to the observed effect to generate training data for learning a non-parametric continuous model using Support Vector Regression. In subsequent iterations, this grounded model is used to predict the expected effects for novel objects, and these predictions in turn constrain the parameter exploration. The cycle and the associated skills have been implemented on the humanoid platform ARMAR-IIIb. Experiments with a set of wiping objects differing in softness and height demonstrate efficient learning and adaptation of the wiping action.
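The core of the cycle, learning a continuous mapping from object properties and action parameters to observed effects, and then querying it for a novel object, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature layout, the toy effect function, and all numeric values are assumptions; the paper itself uses LIBSVM, for which scikit-learn's `SVR` is a common wrapper-style equivalent.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical training data from exploratory wiping trials:
# columns = [softness, height, applied-force parameter].
X = rng.uniform([0.0, 0.0, 1.0], [1.0, 0.3, 10.0], size=(40, 3))

# Hypothetical observed effect (e.g. a wiped-area score); in the real
# cycle this comes from executing the action and measuring the outcome.
y = 2.0 * X[:, 0] - 3.0 * X[:, 1] + 0.5 * np.sin(X[:, 2]) \
    + rng.normal(0.0, 0.05, size=40)

# Non-parametric continuous model of the object/action -> effect relation
# (RBF kernel, as is typical with LIBSVM-style SVR).
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)

# For a novel object, predict the expected effect before acting;
# such predictions can be used to constrain further parameter exploration.
novel = np.array([[0.8, 0.05, 4.0]])  # soft, low object, moderate force
predicted_effect = model.predict(novel)[0]
```

In the experiential cycle, each executed wiping trial appends a new `(X, y)` pair, so the regression model is refit as experience accumulates and its predictions become increasingly reliable priors for action selection.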
