Flexible system of multiple RGB-D sensors for measuring and classifying fruits in the agri-food industry

The productivity of the agri-food sector faces continuous and growing challenges, making the adoption of innovative technologies a priority for maintaining and even improving its competitiveness. In this context, this paper presents the foundations and validation of a flexible, portable system that obtains 3D measurements and classifies objects from color and depth images taken by multiple Kinect v1 sensors. The system is applied to the selection and classification of fruits, a common activity in the agri-food industry. Because it integrates depth information from multiple sensors, the system obtains complete and accurate information about the environment; it self-locates and self-calibrates the sensors and then detects, classifies, and measures fruits in real time. Unlike systems that rely on a specific set-up or require prior calibration, it does not need a predetermined positioning of the sensors and can therefore be adapted to different scenarios. The characterization process comprises classifying the fruits, estimating their volume, and counting the items of each fruit type. The only requirement is that each sensor partially share its field of view with at least one other sensor. The sensors localize themselves by estimating the rotation and translation matrices that transform the coordinate system of one sensor into that of another. To achieve this, the Iterative Closest Point (ICP) algorithm is used and subsequently validated with a six-degree-of-freedom KUKA robotic arm. In addition, a method based on the Kalman filter is implemented to estimate the movement of objects. A relevant contribution of this work is the detailed analysis and propagation of the errors that affect both the proposed methods and the hardware.
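The sensor-to-sensor localization step, estimating the rotation and translation that map one sensor's coordinate frame onto another's, can be sketched as a minimal ICP loop. The numpy implementation below is illustrative only: the brute-force nearest-neighbour matching, fixed iteration count, and synthetic point clouds are assumptions for the sketch, not the paper's actual pipeline.

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (Kabsch/SVD) mapping points A onto B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                 # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(A, B, iters=20):
    """Iteratively match nearest neighbours and re-fit the rigid transform."""
    src = A.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small clouds)
        dists = np.linalg.norm(src[:, None, :] - B[None, :, :], axis=2)
        matched = B[dists.argmin(axis=1)]
        R, t = best_fit_transform(src, matched)
        src = src @ R.T + t
    # total transform taking the original cloud A to its aligned position
    return best_fit_transform(A, src)
```

In the paper the fields of view only partially overlap, so a production version would also need outlier rejection (e.g. a distance threshold on matches) before each re-fit.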
To determine the performance of the proposed system, the passage of different types of fruit on a conveyor belt was emulated by a mobile robot carrying a surface on which the fruits were placed. Both the perimeter and the volume of each fruit were measured, and the fruits were classified by type. The system distinguished and classified 95% of the fruits and estimated their volume with 85% accuracy in the worst cases (fruits with asymmetrical shapes) and 94% accuracy in the best cases (fruits with more symmetrical shapes), showing that the proposed approach can become a useful tool in the agri-food industry.
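The Kalman-filter motion estimation used to track fruits moving along the belt could be sketched, for a single coordinate of a fruit centroid, as a constant-velocity filter. The state layout, noise covariances, and time step below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

class KalmanCV:
    """Constant-velocity Kalman filter tracking one coordinate of a centroid."""
    def __init__(self, dt=1.0, q=1e-3, r=1e-2):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition: x' = x + v*dt
        self.H = np.array([[1.0, 0.0]])             # we measure position only
        self.Q = q * np.eye(2)                      # process noise covariance
        self.R = np.array([[r]])                    # measurement noise covariance
        self.x = np.zeros(2)                        # state: [position, velocity]
        self.P = np.eye(2)                          # state covariance

    def step(self, z):
        """One predict/update cycle given a position measurement z."""
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update
        y = z - self.H @ self.x                     # innovation
        S = self.H @ self.P @ self.H.T + self.R     # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]
```

A full tracker would run one such filter per axis (or a joint 2-D/3-D state) and use the predicted position to associate detections across frames despite occlusions between sensors.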
