Using spatial constraints for fast set-up of precise pose estimation in an industrial setting

This paper presents a method for high-precision visual pose estimation with a simple setup procedure. Industrial robotics is a rapidly growing field, and these robots require very precise position information to perform manipulations. Such precision is usually achieved with fixtures or feeders, both of which are expensive hardware solutions. To enable fast changeovers in production, more flexible solutions are required, one possibility being visual pose estimation. Although many current pose estimation algorithms show improved recognition rates on public datasets, they do not address the needs of actual applications, neither in setup complexity nor in localization accuracy. In contrast, our method solves a set of specific pose estimation problems in a seamless manner with a simple setup procedure. It relies on a number of workcell constraints and employs a novel method for automatically finding stable object poses. In addition, an active rendering method refines the estimated object poses, yielding a localization fine enough for robotic manipulation. Experiments comparing current state-of-the-art 2D algorithms with our method show an average reduction in pose uncertainty from 9 mm to 0.95 mm. The method was also used by the winning team at the 2018 World Robot Summit Assembly Challenge.
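The abstract mentions automatically finding stable object poses as one of the exploited workcell constraints. As a purely illustrative sketch (not the paper's actual algorithm), the snippet below uses the open-source trimesh library to enumerate the stable resting orientations of an object mesh; such a discrete set of orientations can serve as a prior that reduces a full 6D pose search to a planar search on a known support surface. The model path and probability threshold are assumptions.

```python
# Hedged sketch: enumerate stable resting poses of an object mesh with the
# open-source "trimesh" library, then keep them as a discrete prior over the
# rotation component of the pose search. This only illustrates the general
# idea of exploiting a known support plane; it is not the paper's method.
import trimesh

# Load the object model (file name is hypothetical).
mesh = trimesh.load("object.ply")

# compute_stable_poses returns candidate 4x4 transforms that place the mesh
# resting on the z = 0 plane, together with an estimated probability for each.
transforms, probabilities = trimesh.poses.compute_stable_poses(mesh, n_samples=1)

# Keep only the likely resting orientations; a coarse (x, y, yaw) search on the
# table plane can then replace a full 6D search over object poses.
likely = [T for T, p in zip(transforms, probabilities) if p > 0.05]
print(f"{len(likely)} stable poses retained out of {len(transforms)} candidates")
```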
