Pose estimation of metal workpieces based on RPM-Net for robot grasping from point cloud

Purpose
Many metal workpieces are weakly textured, symmetric and reflective, which challenges existing pose estimation methods. The purpose of this paper is to propose a pose estimation method for the grasping of metal workpieces by industrial robots.

Design/methodology/approach
A dual-hypothesis robust point matching registration network (RPM-Net) is proposed to estimate pose from a point cloud. The method uses the Point Cloud Library (PCL) to segment the workpiece point cloud from the scene and a well-trained RPM-Net to estimate the pose through dual-hypothesis point cloud registration.

Findings
An experimental platform is built, consisting of a six-axis industrial robot and a binocular structured-light sensor. A data set containing three subsets is collected on this platform. After training on a simulated data set, the dual-hypothesis RPM-Net is tested on the experimental data set; the success rates on the three real subsets are 94.0%, 92.0% and 96.0%, respectively.

Originality/value
The contributions are twofold: first, a dual-hypothesis RPM-Net is proposed that estimates the pose of discrete, weakly textured metal workpieces from point clouds; second, a method is proposed for building training data sets from CAD models alone, using the visualization algorithms of the PCL.
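To make the segmentation stage concrete, the following is a minimal C++ sketch of a common PCL recipe for isolating workpiece clusters from a captured scene: RANSAC plane removal followed by Euclidean cluster extraction. The thresholds, the plane-removal step and the function name segmentWorkpieces are illustrative assumptions; the abstract states only that the PCL is used to segment the workpiece point cloud from the scene.

```cpp
#include <vector>

#include <pcl/ModelCoefficients.h>
#include <pcl/common/io.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/segmentation/sac_segmentation.h>

using PointT = pcl::PointXYZ;
using CloudT = pcl::PointCloud<PointT>;

// Split a captured scene into per-workpiece clusters:
// (1) remove the dominant plane (the work table) with RANSAC,
// (2) group the remaining points by Euclidean distance.
std::vector<CloudT::Ptr> segmentWorkpieces(const CloudT::Ptr& scene)
{
    // Step 1: fit and remove the table plane.
    pcl::SACSegmentation<PointT> seg;
    seg.setOptimizeCoefficients(true);
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.005);  // 5 mm inlier tolerance (assumed)
    seg.setInputCloud(scene);

    pcl::PointIndices::Ptr planeInliers(new pcl::PointIndices);
    pcl::ModelCoefficients::Ptr planeCoeffs(new pcl::ModelCoefficients);
    seg.segment(*planeInliers, *planeCoeffs);

    CloudT::Ptr objects(new CloudT);
    pcl::ExtractIndices<PointT> extract;
    extract.setInputCloud(scene);
    extract.setIndices(planeInliers);
    extract.setNegative(true);        // keep everything that is NOT the plane
    extract.filter(*objects);

    // Step 2: cluster the remaining points into individual workpieces.
    pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);
    tree->setInputCloud(objects);

    std::vector<pcl::PointIndices> clusterIndices;
    pcl::EuclideanClusterExtraction<PointT> ec;
    ec.setClusterTolerance(0.01);     // 1 cm gap between parts (assumed)
    ec.setMinClusterSize(200);        // reject sensor-noise blobs
    ec.setSearchMethod(tree);
    ec.setInputCloud(objects);
    ec.extract(clusterIndices);

    std::vector<CloudT::Ptr> clusters;
    for (const auto& indices : clusterIndices) {
        CloudT::Ptr cluster(new CloudT);
        pcl::copyPointCloud(*objects, indices, *cluster);
        clusters.push_back(cluster);
    }
    return clusters;
}
```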

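The dual-hypothesis idea can also be illustrated independently of the learned network. In the sketch below, PCL's ICP stands in for RPM-Net (the paper's network is not reproduced here): registration of the CAD model to a segmented cluster is started from two initial poses, identity and a 180-degree flip about an assumed symmetry axis, and the hypothesis with the lower alignment error is kept. This is what resolves the pose ambiguity that symmetric parts create for a single registration run.

```cpp
#include <cmath>
#include <limits>

#include <Eigen/Geometry>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

using PointT = pcl::PointXYZ;
using CloudT = pcl::PointCloud<PointT>;

// Register the CAD model cloud to one segmented cluster from two initial
// pose hypotheses and keep the better fit. ICP is used here only as a
// stand-in for the learned RPM-Net registration.
Eigen::Matrix4f registerDualHypothesis(const CloudT::Ptr& model,
                                       const CloudT::Ptr& cluster)
{
    // Hypothesis B starts from a 180-degree flip about Z, the kind of
    // ambiguity a symmetric workpiece produces (axis chosen for illustration).
    Eigen::Matrix4f flip = Eigen::Matrix4f::Identity();
    flip.block<3, 3>(0, 0) =
        Eigen::AngleAxisf(static_cast<float>(M_PI),
                          Eigen::Vector3f::UnitZ()).toRotationMatrix();

    Eigen::Matrix4f bestPose = Eigen::Matrix4f::Identity();
    double bestError = std::numeric_limits<double>::max();

    auto tryHypothesis = [&](const Eigen::Matrix4f& init) {
        pcl::IterativeClosestPoint<PointT, PointT> icp;
        icp.setInputSource(model);
        icp.setInputTarget(cluster);
        icp.setMaximumIterations(50);

        CloudT aligned;
        icp.align(aligned, init);     // refine starting from this hypothesis
        if (icp.hasConverged() && icp.getFitnessScore() < bestError) {
            bestError = icp.getFitnessScore();
            bestPose = icp.getFinalTransformation();  // model-to-scene pose
        }
    };

    tryHypothesis(Eigen::Matrix4f::Identity());  // hypothesis A
    tryHypothesis(flip);                         // hypothesis B

    return bestPose;
}
```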