Vision-Based Lane Detection and Lane-Marking Model Inference: A Three-Step Deep Learning Approach

Lane detection is a core requirement of many advanced driver-assistance systems (ADAS). Vision-based lane detection is popular because of its cost efficiency, but it is easily affected by illumination changes, especially abrupt ones. Moreover, since most camera systems have a limited angle of view (AOV), a single-camera ADAS can perceive only a portion of a highly curved road, which makes fitting lane models more difficult. In this paper, we propose a lane-model inference method that uses only one of the two lane markings when just one is visible, and even falls back to lane-marking models from previous frames when no lane markings are visible at the current moment. In addition, we propose using deep neural networks (DNNs) to reduce noise at the feature-extraction stage. Our method employs two DNNs: a YOLO network that detects and removes vehicles from the images, and a CPN (convolutional patch network) that detects the road surface so that noise lying outside the road area can be discarded. We tested our method on a video in which the roads are mostly curved and the lighting conditions change rapidly, using the distance between our lane-marking models and the ground truth as the evaluation metric. The results show substantial improvements in scenarios where the scene suddenly becomes very bright and where the road curvature is very high.
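The three-step pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the vehicle boxes and road mask are assumed to come from YOLO- and CPN-style networks supplied elsewhere, and the 30-pixel evidence threshold and fixed lane-width offset are illustrative assumptions.

```python
# Hedged sketch of the three-step pipeline: (1) remove vehicle regions,
# (2) keep only features on the road surface, (3) fit a lane model with
# fallbacks when one or both markings are missing.
import numpy as np

def mask_vehicles(feature_mask, vehicle_boxes):
    """Step 1: zero out lane-marking features inside detected vehicle boxes
    (boxes assumed to come from a YOLO-style detector)."""
    cleaned = feature_mask.copy()
    for x0, y0, x1, y1 in vehicle_boxes:
        cleaned[y0:y1, x0:x1] = 0
    return cleaned

def keep_on_road(feature_mask, road_mask):
    """Step 2: keep only features on the segmented road surface
    (road mask assumed to come from a CPN-style road detector)."""
    return feature_mask & road_mask

def fit_lane_model(feature_mask, prev_model=None, lane_width_px=350):
    """Step 3: fit a quadratic x = f(y) per side; fall back to the other
    marking or the previous frame's model when markings are missing."""
    h, w = feature_mask.shape
    ys, xs = np.nonzero(feature_mask)
    left = xs < w // 2
    models = {}
    for side, sel in (("left", left), ("right", ~left)):
        if sel.sum() >= 30:  # enough evidence to fit this side (assumed threshold)
            models[side] = np.polyfit(ys[sel], xs[sel], deg=2)
    if "left" in models and "right" not in models:
        # only the left marking is visible: shift it by an assumed lane width
        models["right"] = models["left"] + np.array([0.0, 0.0, lane_width_px])
    elif "right" in models and "left" not in models:
        models["left"] = models["right"] - np.array([0.0, 0.0, lane_width_px])
    elif not models and prev_model is not None:
        # no markings visible: reuse the model from a previous moment
        models = prev_model
    return models
```

In use, each frame's binary lane-feature map would be passed through `mask_vehicles` and `keep_on_road` before `fit_lane_model`, with the returned model carried forward as `prev_model` for the next frame.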
