Caging a novel object using a multi-task learning method

Caging grasps provide a way to manipulate an object without fully immobilizing it, and they tolerate uncertainty in the object's pose. Most previous works construct caging sets from a geometric model of the object. This work presents a learning-based method for caging a novel object using only its image. A caging set is first defined via the constrained region, and a mapping from the image feature to the caging set is then constructed with a kernel regression function. To avoid collecting a large number of samples, a multi-task learning method is developed to build the regression function, in which several different caging tasks are trained within a joint model. To transfer caging experience to a new task rapidly, shape similarity is exploited for caging-knowledge transfer. Thus, given only the shape context of a novel object, the learner can accurately predict its caging set through zero-shot learning. The proposed method can be applied to caging a target object in a complex real-world environment, where the user needs to know only the shape feature of the object, not its geometric model. Several experiments demonstrate the validity of the method. (C) 2019 Elsevier B.V. All rights reserved.
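To make the idea of a kernel-regression mapping from image features to a caging set concrete, the following is a minimal illustrative sketch, not the authors' implementation: it uses a Nadaraya-Watson estimator to predict a hypothetical scalar caging parameter (e.g. a gripper opening bound) from a shape feature vector, as a kernel-weighted average over training tasks. The feature vectors, bandwidth, and output parameterization here are all assumptions for illustration.

```python
import numpy as np

def kernel_regression(x_query, X_train, Y_train, bandwidth=1.0):
    """Nadaraya-Watson estimator: predict a caging parameter for a query
    shape feature as a Gaussian-kernel-weighted average of training outputs."""
    # squared Euclidean distance in feature space to each training sample
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    # Gaussian kernel weights, normalized to sum to one
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    w /= w.sum()
    # weighted average of the training caging parameters
    return float(w @ Y_train)

# toy data: 2-D shape features of three known objects -> one caging parameter
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([0.5, 1.0, 1.5])

# prediction for a novel object whose feature lies near the first sample
pred = kernel_regression(np.array([0.1, 0.1]), X, y, bandwidth=0.5)
```

Because the estimate is a convex combination of training outputs, the prediction always stays within the range of observed caging parameters; transferring to a new task then amounts to re-weighting known tasks by shape similarity.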
