A Learning Approach for Robotic Grasp Selection in Open-Ended Domains

Enabling a robot to grasp unknown objects is an ongoing challenge in robotics. The main problem is finding an appropriate grasp configuration, i.e. the position and orientation of the arm relative to the object and the configuration of the fingers. One approach is to recognize an appropriate grasp pose for an object based on one or more grasp demonstrations on the same or other objects. We focus on familiar objects, i.e. unknown objects that share some features with known objects. The underlying assumption in grasping familiar objects is that, where they resemble known ones, they can be grasped in a similar way. However, finding a suitable object representation and similarity metric remains the main challenge in transferring grasp experiences to new objects. In this paper, we present an interactive object view recognition approach and a similarity metric for grasping familiar objects. The object view recognition component is capable of incrementally learning object view labels. The grasp pose learning approach learns a grasp template for a recognized object view from the local and global visual features of a demonstrated grasp. For grasp pose recognition, a similarity measure based on the Mahalanobis distance is used for grasp template matching. The experimental results show that the developed template matching approach is highly reliable. We also demonstrate how the proposed grasp learning system incrementally improves its performance in grasping familiar objects.
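The Mahalanobis-distance matching mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the template labels, the 2-D feature vectors, and the per-template mean/covariance pairs are all hypothetical stand-ins for the local and global visual features learned from demonstrations.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance sqrt((x - mean)^T cov^-1 (x - mean))."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def match_template(x, templates):
    """Return the label and distance of the template closest to feature vector x.

    templates: dict mapping a label to (mean, covariance) of that
    template's feature distribution (hypothetical representation).
    """
    best_label, best_dist = None, float("inf")
    for label, (mean, cov) in templates.items():
        cov_inv = np.linalg.inv(cov)
        dist = mahalanobis(x, mean, cov_inv)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label, best_dist
```

Unlike Euclidean distance, the Mahalanobis distance down-weights feature dimensions that varied a lot across the demonstrations of a template, so a match is judged relative to the observed variability of each grasp template.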
