Self-supervised learning of tool affordances from 3D tool representation through parallel SOM mapping

Future humanoid robots will be expected to carry out a wide range of tasks for which they were not originally equipped, by learning new skills and adapting to their environment. A crucial requirement towards that goal is the ability to exploit external elements as tools to perform tasks for which their own manipulators are insufficient; the ability to autonomously learn how to use tools will render robots far more versatile and simpler to design. Motivated by this prospect, this paper proposes and evaluates an approach that allows robots to learn tool affordances based on the tools' 3D geometry. To this end, we apply tool-pose descriptors to represent tools combined with the way in which they are grasped, and affordance vectors to represent the effect tool-poses achieve as a function of the action performed. Tool affordance learning then consists in determining the mapping between these two representations, which is achieved in two steps. First, the dimensionality of both representations is reduced by mapping them, in an unsupervised manner, onto respective Self-Organizing Maps (SOMs). Then, the mapping between the neurons in the tool-pose SOM and the neurons in the affordance SOM, for pairs of tool-poses and their corresponding affordance vectors, is learned with a neural-network-based regression model. This method enables the robot to accurately predict the effect of its actions with tools, and thus to select the best action for a given goal, even with tools not seen during the learning phase.
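The two-step pipeline described above can be sketched in a few dozen lines. The following is a minimal illustration, not the authors' implementation: `train_som` is a toy SOM trainer with decaying learning rate and neighborhood, `bmu_coords` projects samples onto their best-matching units, and `grnn_predict` is a General Regression Neural Network in the sense of Specht (a Gaussian-kernel-weighted average), standing in for the paper's neural regression between the two maps. All function names, grid sizes, and parameters here are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=50, seed=0):
    """Train a small Self-Organizing Map (toy sketch, not the paper's SOM Toolbox setup).

    Returns neuron weights of shape (gx*gy, dim) and their 2D grid coordinates."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    coords = np.array([(i, j) for i in range(gx) for j in range(gy)], float)
    w = rng.normal(size=(gx * gy, data.shape[1]))
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)                        # decaying learning rate
        sigma = max(gx, gy) / 2 * (1 - t / epochs) + 0.5   # decaying neighborhood width
        for x in data:
            bmu = np.argmin(((w - x) ** 2).sum(1))         # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(1)      # grid distance to the BMU
            h = np.exp(-d2 / (2 * sigma ** 2))             # neighborhood kernel
            w += lr * h[:, None] * (x - w)                 # pull neighborhood towards x
    return w, coords

def bmu_coords(w, coords, data):
    """Map each sample to the 2D grid coordinates of its best-matching unit."""
    idx = np.argmin(((data[:, None, :] - w[None]) ** 2).sum(-1), axis=1)
    return coords[idx]

def grnn_predict(x_train, y_train, x_query, sigma=1.0):
    """GRNN-style regression: Gaussian-kernel-weighted average of training targets."""
    d2 = ((x_query[:, None, :] - x_train[None]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * sigma ** 2))
    return (k @ y_train) / k.sum(1, keepdims=True)
```

In this sketch, tool-pose descriptors and affordance vectors would each be mapped onto their own SOM with `train_som`; the regression is then learned over the BMU coordinates of tool-pose samples, so a novel tool-pose lands on a nearby neuron and inherits a plausible affordance prediction.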
