Grasping Unknown Objects by Exploiting Complementarity with Robot Hand Geometry

Grasping unknown objects with multi-fingered hands is challenging due to incomplete information about scene geometry and the complexity of controlling and planning for robot hands. We propose a method for grasping unknown objects with multi-fingered hands based on shape complementarity between the robot hand and the object. Taking a point cloud of the scene as input, we perform local shape completion and then search for hand poses and finger configurations that optimize a local shape complementarity metric. We validate the proposed approach in the MuJoCo physics engine. Our experiments show that explicitly accounting for the shape complementarity of the hand leads to robust grasping of unknown objects.

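To make the idea of a local shape complementarity metric concrete, below is a minimal sketch (not the authors' implementation) of how such a score could be computed between a candidate hand surface and a completed object point cloud. The function name, the point/normal inputs, and the distance threshold are assumptions introduced for illustration.

```python
# Minimal sketch of a local shape complementarity score, assuming the hand's
# inner-surface points/normals for a candidate pose and the completed object
# points/normals are already expressed in a common frame.
import numpy as np
from scipy.spatial import cKDTree


def complementarity_score(hand_pts, hand_normals, obj_pts, obj_normals,
                          d_max=0.01):
    """Score how well the hand surface locally complements the object surface.

    hand_pts, obj_pts:         (N, 3) and (M, 3) points in meters.
    hand_normals, obj_normals: unit normals; hand normals point toward the
                               object, object normals point outward.
    d_max:                     distance beyond which a hand point contributes 0.
    """
    tree = cKDTree(obj_pts)
    dists, idx = tree.query(hand_pts)  # nearest object point per hand point
    close = dists < d_max
    if not np.any(close):
        return 0.0
    # Complementary surfaces are close together and have opposing normals,
    # so reward a negative dot product weighted by proximity.
    alignment = -np.einsum('ij,ij->i',
                           hand_normals[close], obj_normals[idx[close]])
    proximity = 1.0 - dists[close] / d_max
    return float(np.mean(np.clip(alignment, 0.0, 1.0) * proximity))
```

In a search over candidate hand poses and finger configurations, one would evaluate this score for each collision-free candidate and keep the highest-scoring grasp; the threshold `d_max` and the proximity weighting are illustrative choices, not values taken from the paper.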