Improving object learning through manipulation and robot self-identification

We present a developmental approach that allows a humanoid robot to learn entities continuously and incrementally through interaction with a human partner in a first stage, before categorizing these entities into objects, humans, or robot parts and using this knowledge to improve object models through manipulation in a second stage. The approach requires no prior knowledge about the appearance of the robot, the human, or the objects. The proposed perceptual system segments the visual space into proto-objects, analyses their appearance, and associates them with physical entities. Entities are then classified based on their mutual information with proprioception and on motion statistics. The ability to discriminate between the robot's own parts and a manipulated object then allows the system to update the object model with newly observed views during manipulation. We evaluate our system on an iCub robot and show that the self-identification method is independent of the appearance of the robot's hands by having the robot wear differently colored gloves. Interactive object learning using self-identification improves object recognition accuracy with respect to learning through observation alone.
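The abstract's self-identification criterion rests on a simple idea: a visual entity whose motion carries high mutual information with the robot's proprioception is likely part of the robot's own body, while an independently moving entity is not. The paper does not give its exact formulation, so the sketch below is a minimal illustration under assumed simplifications: motion is discretized into binary moving/still frames, and the decision threshold of 0.5 bits is hypothetical.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Discrete mutual information I(X;Y) in bits between two equal-length sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Toy motion signals: 1 = "moving" in that frame, 0 = "still".
proprio   = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0]  # robot's own motor activity
hand_blob = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0]  # visual entity tracking the hand
object_b  = [0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0]  # independently moving object

THRESHOLD = 0.5  # hypothetical decision threshold, in bits
for name, sig in [("hand_blob", hand_blob), ("object_b", object_b)]:
    mi = mutual_information(proprio, sig)
    label = "robot part" if mi > THRESHOLD else "external entity"
    print(f"{name}: MI = {mi:.3f} bits -> {label}")
```

In this toy run the hand-tracking entity moves in lockstep with the motors and scores near 1 bit, while the independently moving object scores close to 0, so only the former is labeled as a robot part; a full system would estimate these statistics online over longer windows and add motion cues to distinguish humans from inert objects.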
