Lane Mark and Drivable Area Detection Using a Novel Instance Segmentation Scheme

This paper presents a novel instance segmentation scheme that detects lane marks and the drivable area simultaneously by combining an evolved fully convolutional network (FCN) with inverse perspective mapping (IPM). Many existing methods attempt to locate the road surface from lane marks and then derive the drivable area. However, traditional lane detection methods often fail to achieve a high detection rate in adverse weather or complex road environments, such as rain, fog, varying illumination, worn road paint, and varied lane mark types including solid lines, double lines, broken lines, and tempered-glass road studs (cat's eyes). Hence, we present a supervised learning approach that detects lane marks and the drivable area simultaneously using our evolved FCN and a top-view image obtained through IPM. First, the evolved FCN classifies each pixel in the image, simultaneously producing precise contours and object classification results that segment the road region in detail. However, the FCN output only labels the class of each pixel, and clusters of lane-line pixels may be only partially recognized. Therefore, to extrapolate each line over the full lane length, a lane mark fitting algorithm based on IPM identifies all pixels that lie on the line; thus, the road lane model can be obtained efficiently. The proposed algorithm not only recognizes lane marks and the drivable area, and even supports obstacle detection, but also helps achieve the goal of automated driving. Finally, extensive experimental results and quantitative analysis on real-world scenes in general road environments and on a published dataset, Cityscapes, demonstrate the significant contribution of the proposed approach.
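
To make the IPM-based fitting step concrete, the following is a minimal sketch, not the authors' implementation, of warping a binary lane-mark mask (e.g., from the segmentation network) to a top-down view and fitting a polynomial per lane-line cluster so that partially detected lines can be extrapolated over the full lane length. The OpenCV/NumPy calls, the hand-picked `src_pts`/`dst_pts` for the perspective transform, and the cluster-size threshold are assumptions for illustration.

```python
import cv2
import numpy as np


def ipm_lane_fit(lane_mask, src_pts, dst_pts, degree=2):
    """Warp a binary lane-mark mask to a bird's-eye view via inverse
    perspective mapping, then fit one polynomial per lane-line cluster."""
    h, w = lane_mask.shape[:2]

    # Homography from the camera view to the top-down view
    # (src_pts / dst_pts are assumed, manually chosen calibration points).
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    top_view = cv2.warpPerspective(lane_mask.astype(np.uint8), H, (w, h),
                                   flags=cv2.INTER_NEAREST)

    # Separate individual lane-line pixel clusters.
    num_labels, labels = cv2.connectedComponents(top_view)

    lane_models = []
    for label in range(1, num_labels):
        ys, xs = np.nonzero(labels == label)
        if len(xs) < 50:  # assumed threshold: skip tiny noise clusters
            continue
        # Fit x = f(y): in the top view lane lines are near-vertical, so the
        # fitted model can be evaluated over the full image height to
        # extrapolate lines that were only partially segmented.
        lane_models.append(np.polyfit(ys, xs, degree))
    return H, lane_models
```

In this sketch the polynomial degree and connected-component clustering stand in for whatever fitting and pixel-grouping the paper's lane mark fitting algorithm actually uses; the key idea illustrated is that fitting is done in the IPM top view, where lane geometry is simpler to model.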