Teach it Yourself - Fast Modeling of Industrial Objects for 6D Pose Estimation

In this paper, we present a vision system that allows a human to create new 3D models of novel industrial parts by placing the part in two different positions in the scene. The two-shot modeling framework generates models with a precision that allows them to be used for 6D pose estimation without loss of pose accuracy. We quantitatively show that our modeling framework reconstructs noisy but adequate object models, with a mean RMS error of 2.7 mm, a mean standard deviation of 0.025 mm, and a completeness of 70.3 % over all 14 reconstructed models, compared to the ground-truth CAD models. In addition, the models are applied in a pose estimation application and evaluated on 37 different scenes containing 61 unique object poses. The pose estimation results show a mean translation error of 4.97 mm and a mean rotation error of 3.38 degrees.
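The abstract reports two families of metrics: reconstruction quality (RMS error and completeness against ground-truth CAD models) and pose accuracy (translation and rotation error). As a minimal sketch, not the authors' code, the snippet below illustrates how such metrics are commonly computed from aligned point clouds and 4x4 homogeneous pose matrices; the NumPy/SciPy usage, the 5 mm completeness threshold, and the function names are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not the paper's implementation) of the two metric
# families mentioned in the abstract: reconstruction quality against a CAD
# model and 6D pose error between an estimated and a ground-truth pose.
import numpy as np
from scipy.spatial import cKDTree


def rms_error(reconstructed: np.ndarray, cad: np.ndarray) -> float:
    """RMS of distances from each reconstructed point to its nearest CAD point (metres)."""
    dists, _ = cKDTree(cad).query(reconstructed)
    return float(np.sqrt(np.mean(dists ** 2)))


def completeness(reconstructed: np.ndarray, cad: np.ndarray,
                 threshold: float = 0.005) -> float:
    """Fraction of CAD points with a reconstructed point within `threshold` metres.
    The 5 mm threshold is an assumed value for illustration only."""
    dists, _ = cKDTree(reconstructed).query(cad)
    return float(np.mean(dists < threshold))


def pose_errors(T_est: np.ndarray, T_gt: np.ndarray) -> tuple[float, float]:
    """Translation error (metres) and rotation error (degrees) between 4x4 poses."""
    t_err = float(np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3]))
    R_rel = T_est[:3, :3].T @ T_gt[:3, :3]
    angle = np.degrees(np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)))
    return t_err, float(angle)


if __name__ == "__main__":
    # Toy data standing in for a reconstructed model and sampled CAD surface.
    rng = np.random.default_rng(0)
    cad_points = rng.uniform(size=(5000, 3))
    recon_points = cad_points[:3500] + rng.normal(scale=0.002, size=(3500, 3))
    print(f"RMS error:    {rms_error(recon_points, cad_points) * 1000:.2f} mm")
    print(f"Completeness: {completeness(recon_points, cad_points) * 100:.1f} %")
```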