Building an Affordances Map With Interactive Perception

Robots need to understand their environment to perform their tasks. While a visual scene analysis process can be pre-programmed in closed environments, robots operating in open environments would benefit from the ability to learn it through interaction with their surroundings. This ability furthermore opens the way to the acquisition of affordance maps, in which the robot's action capabilities structure its visual scene understanding. We propose an approach to building such affordance maps that relies on interactive perception and online classification. In the proposed formalization of affordances, actions and effects are related to visual features rather than objects, and they can be combined. We have tested the approach with three action primitives on a real PR2 robot.
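To make the idea concrete, the sketch below shows one plausible way to organize such an affordance map: one online binary classifier per action primitive, mapping a visual feature vector (e.g., a local shape descriptor of a scene region) to the probability that applying the primitive there produces its expected effect. This is a minimal illustration under assumed choices (scikit-learn's SGDClassifier as the online learner, a generic feature vector per region); it is not the authors' exact pipeline.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier


class AffordanceMap:
    """Illustrative sketch: per-primitive online classifiers relating
    visual features of scene regions to expected action effects.
    The classifier and feature representation are assumptions."""

    def __init__(self, primitives):
        # One binary classifier per action primitive, e.g. "push", "pull", "grasp".
        self.classifiers = {p: SGDClassifier(loss="log_loss") for p in primitives}
        self.seen = {p: False for p in primitives}

    def update(self, primitive, features, effect_observed):
        """Online update after one interaction: `features` describes the
        region acted upon, `effect_observed` is True if the expected
        effect occurred."""
        X = np.asarray(features, dtype=float).reshape(1, -1)
        y = np.array([int(effect_observed)])
        self.classifiers[primitive].partial_fit(X, y, classes=[0, 1])
        self.seen[primitive] = True

    def predict(self, primitive, features):
        """Estimated probability that applying `primitive` on a region
        with the given features yields the expected effect."""
        if not self.seen[primitive]:
            return 0.5  # no evidence yet for this primitive
        X = np.asarray(features, dtype=float).reshape(1, -1)
        return float(self.classifiers[primitive].predict_proba(X)[0, 1])
```

In such a loop, the robot would segment the scene into regions, query each primitive's classifier on each region's descriptor to build the affordance map, act on a selected region, and feed the observed effect back through `update`, so that perception is refined by interaction.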
