Object recognition using laser range finder and machine learning techniques

In recent years, computer vision has been widely used in industrial environments, allowing robots to perform important tasks such as quality control, inspection and recognition. Vision systems are typically used to determine the position and orientation of objects in the workstation so that they can be transported and assembled by a robotic cell (e.g. an industrial manipulator). These systems commonly rely on CCD (Charge-Coupled Device) cameras, either fixed over a particular work area or attached directly to the robotic arm (eye-in-hand vision system). Although this is a valid approach, the performance of such vision systems is directly influenced by the lighting of the industrial environment. Taking this into consideration, a new approach is proposed for eye-in-hand systems in which the camera is replaced by a 2D Laser Range Finder (LRF). The LRF is attached to a robotic manipulator, which executes a pre-defined path to produce grayscale images of the workstation. With this technique, interference from the environment lighting is minimized, resulting in a more reliable and robust computer vision system. Once the grayscale image has been created, this work focuses on the recognition and classification of different objects using inherent features (based on Hu's invariant moments) with well-known machine learning models: k-Nearest Neighbors (kNN), Neural Networks (NNs) and Support Vector Machines (SVMs). To achieve a good performance for each classification model, a wrapper method is used to select a good subset of features, and a model assessment technique, k-fold cross-validation, is used to tune the parameters of the classifiers. The performance of the models is also compared, achieving generalized accuracies of 83.5% for kNN, 95.5% for the NN and 98.9% for the SVM. These high performances are related to the feature selection algorithm, based on the simulated annealing heuristic, and to the model assessment by k-fold cross-validation: together they make it possible to identify the most important features in the recognition process and to adjust the best parameters for the machine learning models, increasing the classification rate of the work objects present in the robot's environment.

Highlights:
- This paper analyses the performance of a laser range finder for object recognition.
- Comparison between the laser range finder and the CCD camera approach.
- Object classification performed using machine learning techniques.
- The proposed technique proves to be superior in terms of robustness and reliability.
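The abstract outlines a complete pipeline: Hu moment features extracted from the LRF grayscale image, a simulated-annealing wrapper for feature subset selection, and k-fold cross-validation over kNN, neural network and SVM classifiers. The following is a minimal sketch of that pipeline, assuming OpenCV for the Hu moments and scikit-learn for the classifiers and cross-validation; the annealing schedule, all model parameters and the placeholder data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the described pipeline (illustrative assumptions throughout).

import random

import numpy as np
import cv2                                   # assumed available for Hu moments
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC


def hu_features(gray_image):
    """Return the 7 Hu invariant moments of a grayscale LRF image (log-scaled)."""
    hu = cv2.HuMoments(cv2.moments(gray_image)).flatten()
    # The log transform keeps the moments in a comparable numeric range.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)


def cv_accuracy(model, X, y, mask, k=5):
    """k-fold cross-validated accuracy using only the features selected by `mask`."""
    if not any(mask):
        return 0.0
    return cross_val_score(model, X[:, mask], y, cv=k).mean()


def sa_feature_selection(model, X, y, iters=60, t0=1.0, cooling=0.95, seed=0):
    """Wrapper feature selection via simulated annealing (illustrative settings)."""
    rng = random.Random(seed)
    n_features = X.shape[1]
    mask = [True] * n_features               # start with every feature selected
    best_mask, best_score = mask[:], cv_accuracy(model, X, y, mask)
    score, temp = best_score, t0
    for _ in range(iters):
        cand = mask[:]
        cand[rng.randrange(n_features)] ^= True   # flip one feature in or out
        cand_score = cv_accuracy(model, X, y, cand)
        # Accept improvements always; accept worse subsets with a probability
        # that shrinks as the temperature cools.
        if cand_score > score or rng.random() < np.exp((cand_score - score) / max(temp, 1e-9)):
            mask, score = cand, cand_score
            if score > best_score:
                best_mask, best_score = mask[:], score
        temp *= cooling
    return best_mask, best_score


if __name__ == "__main__":
    # In the real pipeline X would hold one row of hu_features() per LRF grayscale
    # image and y the object labels; random placeholder data is used here only so
    # the sketch runs end to end.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 7))
    y = rng.integers(0, 4, size=120)

    models = {
        "kNN": KNeighborsClassifier(n_neighbors=3),
        "NN": MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0),
        "SVM": SVC(kernel="rbf", C=10.0, gamma="scale"),
    }
    for name, model in models.items():
        mask, acc = sa_feature_selection(model, X, y)
        print(f"{name}: features {np.flatnonzero(mask).tolist()}, CV accuracy {acc:.3f}")
```

The wrapper evaluates each candidate feature subset with the same k-fold cross-validation used for model assessment, so the selected subset is tuned to the classifier at hand; reported figures in the abstract refer to the authors' own data and tuning, not to this sketch.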
