Real-time lane marker detection using template matching with RGB-D camera

This paper addresses lane detection, a fundamental problem for self-driving vehicles. Our approach exploits both the colour and depth information recorded by a single RGB-D camera to better cope with adverse factors such as lighting conditions and lane-like objects. Colour and depth images are first converted to a half-binary format and a 2D matrix of 3D points, respectively. These are then used as inputs to template matching and geometric feature extraction processes that form a response map whose values represent the probability of each pixel being a lane marker. To further improve the results, the template and lane surfaces are refined by principal component analysis and lane-model fitting. Experiments have been conducted on both synthetic and real datasets. The results show that the proposed approach effectively eliminates unwanted noise and accurately detects lane markers in various scenarios. Moreover, a processing speed of 20 frames per second on the hardware of a popular laptop computer makes the algorithm suitable for real-time autonomous driving applications.
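The template-matching stage that produces the response map can be illustrated with a minimal sketch. This is not the paper's implementation: it covers only the colour channel (the depth/geometric branch is omitted), uses a hypothetical hand-built template, and implements plain normalized cross-correlation, with scores rescaled to [0, 1] so they can be read as a per-pixel lane-marker likelihood as the abstract describes.

```python
import numpy as np

def response_map(image, template):
    """Slide `template` over `image` and return a normalized
    cross-correlation score in [0, 1] per valid position.
    Illustrative stand-in for the paper's template-matching stage."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            out[y, x] = (p * t).sum() / denom if denom > 0 else 0.0
    return (out + 1.0) / 2.0  # map correlation in [-1, 1] to [0, 1]

# Synthetic road image: dark asphalt with one bright vertical stripe.
img = np.full((40, 60), 0.1)
img[:, 30:34] = 0.9                 # lane marker at columns 30..33
template = np.full((8, 10), 0.1)
template[:, 3:7] = 0.9              # hypothetical marker-profile template

resp = response_map(img, template)
y, x = np.unravel_index(resp.argmax(), resp.shape)
# The peak lands where the template's bright band aligns with the stripe
# (x == 27, i.e. template columns 3..6 over image columns 30..33).
```

In the full pipeline the response map would additionally be gated by the geometric features extracted from the depth image, suppressing lane-like textures that do not lie on the road plane.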
