Compressing Grasping Experience into a Dictionary of Prototypical Grasp-predicting Parts

We present a real-world robotic agent that is capable of transferring grasping strategies across objects that share similar parts. The agent transfers grasps across objects by identifying, from examples provided by a teacher, parts by which objects are often grasped in a similar fashion. It then uses these parts to identify grasping points on novel objects.

Because most human environments make it infeasible to pre-program grasping behaviors for every object the robot might encounter, grasping novel objects is a key issue in human-friendly robotics. Recent approaches to grasping novel objects aim at devising a direct mapping from visual features to grasp parameters. A central question in such approaches is what visual features to use. Some authors have shown that grasps can be computed from local visual features [3]. However, local features suffer from poor geometric resolution, which makes it difficult to accurately compute the 6D pose of a gripper. By contrast, using object parts as features allows robots to compute grasps of high geometric accuracy [1], [2], [4].

We present a method that allows a robot to learn to formulate grasp plans from visual data obtained with a 3D sensor. Our method relies on the identification of prototypical parts by which objects are often grasped. To this end, we provide the robot with the means of identifying, from a set of grasp examples, the 3D shape of parts that are recurrently observed within the manipulator during grasping. Our approach effectively compresses the training data, generating a dictionary of prototypical parts that is an order of magnitude smaller than the training dataset. As prototypical parts are extracted from grasp examples, each of them automatically inherits a grasping strategy that parametrizes (1) the position and orientation of the manipulator with respect to the part, and (2) the finger preshape, i.e., the configuration in which the fingers should be set prior to grasping. When a novel object appears, the robot tries to fit the prototypical parts to a 3D snapshot that partially captures the object. The grasp associated with the part that best fits the snapshot can be executed to manipulate the object. A key aspect of our work is that the shape and the spatial extent (or size) of the prototypes generated by our method directly result from the grasp examples provided by the teacher.
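To make the approach concrete, the sketch below outlines how a dictionary of prototypical parts could drive grasp selection on a novel object. It is a minimal illustration under assumed data structures, not the paper's implementation: the names (PrototypicalPart, fit_part, select_grasp) are hypothetical, and the fitting step is reduced to a coarse rotation sweep scored by nearest-neighbour distance, whereas the actual system would rely on a proper 3D registration of part shapes to the sensor snapshot.

```python
# Minimal sketch of grasp transfer via a dictionary of prototypical parts.
# Hypothetical data structures and a deliberately simplified fitting step;
# the real system would use a full 3D registration method.

from dataclasses import dataclass
import numpy as np
from scipy.spatial import cKDTree


@dataclass
class PrototypicalPart:
    """One dictionary entry, extracted from the teacher's grasp examples."""
    points: np.ndarray      # (N, 3) part shape, expressed in the part frame
    grasp_pose: np.ndarray  # (4, 4) gripper pose relative to the part frame
    preshape: np.ndarray    # finger configuration prior to closing the hand


def rot_z(angle: float) -> np.ndarray:
    """Homogeneous rotation about the vertical axis."""
    c, s = np.cos(angle), np.sin(angle)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T


def fit_part(part: PrototypicalPart, snapshot: np.ndarray, n_angles: int = 36):
    """Coarsely align a part to an object snapshot (M, 3) and score the fit.

    Returns (score, T_cam_part): the mean nearest-neighbour distance and the
    best rigid transform from the part frame to the camera frame.
    """
    tree = cKDTree(snapshot)
    best_score, best_T = np.inf, None
    for angle in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        T = rot_z(angle)
        rotated = part.points @ T[:3, :3].T
        # Translate the rotated part onto the snapshot centroid.
        T[:3, 3] = snapshot.mean(axis=0) - rotated.mean(axis=0)
        dists, _ = tree.query(rotated + T[:3, 3])
        score = dists.mean()
        if score < best_score:
            best_score, best_T = score, T
    return best_score, best_T


def select_grasp(dictionary, snapshot):
    """Fit every prototypical part and return the grasp of the best fit."""
    best = min(
        ((part,) + fit_part(part, snapshot) for part in dictionary),
        key=lambda item: item[1],
    )
    part, _, T_cam_part = best
    # Compose the part-to-camera transform with the stored gripper pose to
    # obtain a 6D grasp in the camera frame, plus the inherited preshape.
    grasp_in_camera = T_cam_part @ part.grasp_pose
    return grasp_in_camera, part.preshape
```

The essential idea carries over regardless of the registration method used: because each part stores a gripper pose in its own frame, composing that pose with the part-to-camera transform found during fitting yields a full 6D grasp, together with the finger preshape, for the novel object.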

[1] D. Kragic et al., "Generalizing grasps across partly similar objects," in IEEE International Conference on Robotics and Automation (ICRA), 2012.

[2] L. Zhang, "Grasp Evaluation With Graspable Feature Matching," 2010.

[3] A. Herzog et al., "Template-based learning of grasp selection," in IEEE International Conference on Robotics and Automation (ICRA), 2012.

[4] A. Saxena et al., "Robotic Grasping of Novel Objects using Vision," International Journal of Robotics Research, 2008.