Depth-Based Fruit Detection from Viewer-Based Pose

Seeking to accurately detect and localize fruit of any color in 3D space for selective agricultural-robotic operations, we exploit data from Time-of-Flight or RGB-D cameras and propose a novel shape-based fruit detector built on a fruit pose reference frame defined relative to the viewer. Surface normals, which are shape-based local features, are accumulated into bins of different shapes along the reference frame's axes. Each normal is represented by two angles in a viewer-based reference frame, yielding a representation suited to fruit types that are nearly symmetric around an axis (e.g., bell peppers) without affecting fruit types that lack axial symmetry. Results are shown on a particularly challenging pepper dataset.
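The core of the descriptor described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes surface normals have already been estimated from the depth data, represents each unit normal by two viewer-based angles (here, azimuth in the image plane and elevation from the viewing axis, both hypothetical choices), and accumulates them into a 2D angle histogram; the paper's binning uses bins of different shapes along the pose frame's axes, which this flat histogram does not reproduce.

```python
import numpy as np

def normal_angles(normals):
    """Represent unit surface normals (N x 3, camera frame, z along the
    viewing axis) by two viewer-based angles.

    Assumed convention (illustrative only):
      azimuth   -- angle of the normal's projection in the image plane
      elevation -- angle between the normal and the axis toward the viewer
    """
    nx, ny, nz = normals[:, 0], normals[:, 1], normals[:, 2]
    azimuth = np.arctan2(ny, nx)                      # in [-pi, pi]
    elevation = np.arccos(np.clip(-nz, -1.0, 1.0))    # in [0, pi]
    return azimuth, elevation

def angle_histogram(normals, n_az=8, n_el=4):
    """Accumulate viewer-based normal angles into a normalized 2D
    histogram, flattened into a feature vector."""
    az, el = normal_angles(normals)
    hist, _, _ = np.histogram2d(
        az, el,
        bins=[n_az, n_el],
        range=[[-np.pi, np.pi], [0.0, np.pi]],
    )
    return hist.ravel() / max(1, len(normals))
```

For a fruit that is roughly symmetric about an axis, normals at the same elevation but different azimuths would populate related bins, which is what makes an angle-based representation a natural fit for the near-axisymmetric case the abstract mentions.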
