Learning object-specific grasp affordance densities

This paper addresses the problem of learning and representing object grasp affordances, i.e. object-gripper relative configurations that lead to successful grasps. The purpose of grasp affordances is to organize and store all the knowledge an agent has about grasping an object, in order to facilitate reasoning about grasping solutions and their achievability. The affordance representation is a continuous probability density function defined over the 6D gripper pose space (3D position and 3D orientation) in an object-relative reference frame. Grasp affordances are initially learned from various sources, e.g. from imitation or from visual cues, yielding grasp hypothesis densities. Grasp densities are attached to a learned 3D visual object model, and pose estimation of the visual model allows a robotic agent to execute samples drawn from a grasp hypothesis density under various object poses. Grasp outcomes are then used to learn grasp empirical densities, i.e. densities of grasps that have been confirmed through experience. We show results of learning grasp hypothesis densities from both imitation and visual cues, and present grasp empirical densities learned from physical experience by a robot.
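The density-based representation lends itself to a simple nonparametric reading: a grasp density can be held as a set of weighted 6D gripper poses in the object frame, sampled for execution, and re-estimated from the poses that succeeded. The sketch below illustrates that loop under stated assumptions only; the class and parameter names (GraspDensity, pos_bw, rot_bw, empirical_density) are hypothetical, and the Gaussian-perturb-and-renormalize step on quaternions is a crude stand-in for a proper orientation kernel (e.g. a von Mises-Fisher-style kernel on the quaternion sphere), not the paper's actual kernel.

```python
import numpy as np

class GraspDensity:
    """Nonparametric grasp density: weighted 6D gripper poses
    (3D position + unit quaternion) in the object-relative frame."""

    def __init__(self, positions, quaternions, weights=None,
                 pos_bw=0.01, rot_bw=0.1):
        self.positions = np.asarray(positions, dtype=float)    # (N, 3) metres
        self.quaternions = np.asarray(quaternions, dtype=float)  # (N, 4) unit quaternions
        n = len(self.positions)
        self.weights = (np.ones(n) / n if weights is None
                        else np.asarray(weights, dtype=float) / np.sum(weights))
        self.pos_bw = pos_bw   # position kernel bandwidth (std. dev., metres)
        self.rot_bw = rot_bw   # orientation kernel bandwidth (illustrative)

    def sample(self, rng):
        """Draw one gripper pose: pick a kernel by weight, then perturb it."""
        i = rng.choice(len(self.weights), p=self.weights)
        pos = self.positions[i] + rng.normal(scale=self.pos_bw, size=3)
        quat = self.quaternions[i] + rng.normal(scale=self.rot_bw, size=4)
        return pos, quat / np.linalg.norm(quat)   # re-project onto unit quaternions


def empirical_density(hypothesis_density, attempted_poses, outcomes):
    """Build an empirical density from executed samples of a hypothesis
    density: keep only the poses whose grasps succeeded, equally weighted."""
    kept = [pose for pose, ok in zip(attempted_poses, outcomes) if ok]
    positions = np.array([p for p, q in kept])
    quaternions = np.array([q for p, q in kept])
    return GraspDensity(positions, quaternions,
                        pos_bw=hypothesis_density.pos_bw,
                        rot_bw=hypothesis_density.rot_bw)
```

A typical use of this sketch would be: sample poses from a hypothesis density, transform them by the estimated object pose, execute them on the robot, record binary outcomes, and call empirical_density on the results; this mirrors the hypothesis-density to empirical-density refinement described in the abstract, under the assumptions noted above.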
