Autonomous Learning of Object-specific Grasp Affordance Densities

In this paper, we address the issue of learning and representing object grasp affordances. Our first aim is to organize and memorize, independently of the sources of grasp information, all of the knowledge an agent has about grasping an object, in order to facilitate reasoning on grasping solutions and their likelihood of success. By grasp affordance, we refer to the different ways of placing a hand or a gripper near an object so that closing the gripper produces a stable grip. The grasps we consider are parametrized by a 6D gripper pose and a grasp (preshape) type. The gripper pose is composed of a 3D position and a 3D orientation, defined within an object-relative reference frame. We represent the affordance of an object for a certain grasp type through a continuous probability density function defined on the 6D object-relative gripper pose space SE(3), similar to the approach of de Granville et al. [2]. The computational encoding is nonparametric: a density is represented by a set of samples drawn from it. The samples supporting a density are called particles; the probabilistic density in a region of space is given by the local density of the particles in that region. The underlying continuous density is accessed by assigning a kernel function to each particle, a technique generally known as kernel density estimation [6].
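To make the particle-based representation concrete, the sketch below shows one possible way to evaluate such a density; it is our illustration, not the paper's implementation. It stores particles as a 3D position plus a unit quaternion, and forms the continuous density by averaging per-particle kernels. The specific kernel choices (an isotropic Gaussian on position, an antipodally symmetric quaternion dot-product kernel on orientation) and the bandwidth parameters `sigma_pos` and `kappa_ori` are illustrative assumptions.

```python
# Minimal sketch of a nonparametric grasp affordance density via kernel
# density estimation over 6D gripper poses. Hypothetical, for illustration:
# kernel forms and bandwidths are assumptions, not the paper's choices.
import numpy as np

class GraspAffordanceDensity:
    def __init__(self, particles, sigma_pos=0.01, kappa_ori=20.0):
        # particles: (N, 7) array; columns 0..2 hold the 3D position,
        # columns 3..6 hold a unit quaternion encoding the orientation.
        self.particles = np.asarray(particles, dtype=float)
        self.sigma_pos = sigma_pos   # position bandwidth (assumed, in meters)
        self.kappa_ori = kappa_ori   # orientation concentration (assumed)

    def _pos_kernel(self, x):
        # Isotropic Gaussian kernel on the 3D position component.
        d2 = np.sum((self.particles[:, :3] - x) ** 2, axis=1)
        return np.exp(-0.5 * d2 / self.sigma_pos ** 2)

    def _ori_kernel(self, q):
        # Antipodally symmetric kernel on unit quaternions: q and -q
        # describe the same rotation, hence the squared dot product.
        dots = self.particles[:, 3:] @ q
        return np.exp(self.kappa_ori * (dots ** 2 - 1.0))

    def evaluate(self, pose):
        # Unnormalized density at a 6D pose: average of the kernels
        # assigned to each particle, as in kernel density estimation.
        x, q = pose[:3], pose[3:] / np.linalg.norm(pose[3:])
        return np.mean(self._pos_kernel(x) * self._ori_kernel(q))

# Usage with synthetic particles standing in for observed grasps:
rng = np.random.default_rng(0)
quats = rng.normal(size=(100, 4))
quats /= np.linalg.norm(quats, axis=1, keepdims=True)
particles = np.hstack([rng.normal(scale=0.02, size=(100, 3)), quats])
density = GraspAffordanceDensity(particles)
print(density.evaluate(particles[0]))
```

In this sketch, regions of pose space dense in particles receive high kernel mass, matching the idea above that the probabilistic density in a region is given by the local density of particles there.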