Representations for cross-task, cross-object grasp transfer

We address the problem of transferring grasp knowledge across objects and tasks. This requires addressing two issues: 1) inducing possible transfers, i.e., deciding whether a given object affords a given task, and 2) planning a grasp that allows the robot to fulfill the task. We approach affordance induction by abstracting the sensory input of an object into a set of attributes that the agent can reason about through similarity and proximity. For grasp execution, we combine a part-based grasp planner with a model of task constraints. The task constraint model indicates the areas of the object that the robot may grasp to execute the task; within these areas, the part-based planner finds a hand placement compatible with the object's shape. The key contribution is that the task constraint model transfers task parameters across objects, while the part-based grasp planner transfers grasp information across tasks. As a result, the robot can synthesize grasp plans for previously unobserved task/object combinations. We illustrate our approach with experiments conducted on a real robot.
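
The pipeline can be summarized in two stages: affordance induction by attribute similarity, followed by grasp selection restricted to task-compatible regions of the object. The sketch below is a minimal, illustrative rendering of that flow under stated assumptions, not the probabilistic models used in the paper; the attribute vectors, the cosine-similarity measure, the box-shaped task region, and the function names (affords, plan_task_grasp) are hypothetical placeholders.

```python
import numpy as np

def affords(query_attrs, known_objects, task, threshold=0.8):
    """Infer whether an object affords a task by attribute similarity.

    known_objects: list of (attribute_vector, {task: bool}) pairs from prior
    experience. The query object inherits the affordance label of its most
    similar known object when similarity exceeds the threshold.
    (Illustrative stand-in for the paper's similarity-based reasoning.)
    """
    best_sim, best_labels = -1.0, None
    for attrs, labels in known_objects:
        # cosine similarity between attribute vectors
        sim = np.dot(query_attrs, attrs) / (
            np.linalg.norm(query_attrs) * np.linalg.norm(attrs))
        if sim > best_sim:
            best_sim, best_labels = sim, labels
    return best_sim >= threshold and best_labels.get(task, False)


def plan_task_grasp(grasp_candidates, task_region):
    """Pick a part-based grasp candidate inside the task-constraint region
    (an axis-aligned box here, purely for illustration).

    grasp_candidates: list of (contact_point, score) from a part-based planner.
    task_region: ((xmin, ymin, zmin), (xmax, ymax, zmax)) graspable area.
    """
    lo, hi = np.asarray(task_region[0]), np.asarray(task_region[1])
    feasible = [(p, s) for p, s in grasp_candidates
                if np.all(p >= lo) and np.all(p <= hi)]
    if not feasible:
        return None
    # return the highest-scoring grasp compatible with the task constraint
    return max(feasible, key=lambda ps: ps[1])[0]


if __name__ == "__main__":
    # toy attribute vectors (e.g., size, elongation, hollowness)
    mug = np.array([0.3, 0.4, 0.9])
    bowl = np.array([0.4, 0.2, 0.8])
    known = [(mug, {"pouring": True, "hammering": False})]

    print(affords(bowl, known, "pouring"))  # True: the bowl resembles the mug
    candidates = [(np.array([0.0, 0.0, 0.12]), 0.7),
                  (np.array([0.0, 0.0, 0.02]), 0.9)]
    region = ((-0.1, -0.1, 0.05), (0.1, 0.1, 0.2))  # e.g., grasp above the base
    print(plan_task_grasp(candidates, region))      # the candidate at z = 0.12
```

Note that the high-scoring candidate near the base is rejected because it violates the task region; only grasps inside the task-compatible area are ranked, which mirrors how the task constraint model prunes the part-based planner's output.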
