Freespace Optical Flow Modeling for Automated Driving

Optical flow and disparity are two informative visual features for autonomous driving perception. They have been used in a variety of applications, such as obstacle and lane detection. The concept of "U-V-Disparity" has been widely explored in the literature, whereas its counterpart in optical flow has received relatively little attention. Conventional motion analysis algorithms estimate optical flow by matching correspondences between two successive video frames, without exploiting the geometric constraints imposed by the driving environment. We therefore propose a novel strategy for modeling optical flow in the collision-free space (also referred to as the drivable area, or simply the freespace) for intelligent vehicles, fully exploiting the geometry of the 3-D driving environment. We derive explicit expressions for the optical flow in the freespace and show that each flow component is a quadratic function of the vertical image coordinate. Extensive experiments on several public datasets demonstrate the high accuracy and robustness of our model. Moreover, the proposed freespace optical flow model supports a wide range of automated driving applications, providing a geometric constraint for freespace detection, vehicle localization, and more. Our source code is publicly available at https://mias.group/FSOF.
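The quadratic form of the model can be made concrete. Intuitively, for a camera translating over a locally planar road, the inverse depth of a freespace pixel varies linearly with the vertical image coordinate v (the classical v-disparity line), and the translational flow scales with inverse depth times image position, so each flow component takes the form f(v) = a v^2 + b v + c. The sketch below is a minimal illustration of fitting such a model by ordinary least squares; the function name, synthetic coefficient values, and noise level are all assumptions for illustration, not the paper's actual fitting procedure.

```python
import numpy as np

def fit_freespace_flow(v_coords, flow_component):
    """Fit the quadratic model f(v) = a*v^2 + b*v + c to one optical flow
    component sampled at vertical image coordinates v (freespace pixels
    only). Returns the coefficients (a, b, c)."""
    # Design matrix for an ordinary least-squares quadratic fit.
    A = np.stack([v_coords**2, v_coords, np.ones_like(v_coords)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, flow_component, rcond=None)
    return coeffs  # (a, b, c)

# Toy example: synthesize a quadratic flow profile over freespace rows
# with additive noise, then recover the coefficients.
rng = np.random.default_rng(0)
v = np.arange(200, 400, dtype=float)      # hypothetical freespace rows
u_true = 1.5e-4 * v**2 - 0.02 * v + 3.0   # assumed ground-truth profile
u_observed = u_true + rng.normal(0.0, 0.1, v.shape)
a, b, c = fit_freespace_flow(v, u_observed)
print(f"a={a:.2e}, b={b:.3f}, c={c:.2f}")
```

Once estimated on freespace pixels, such coefficients act as a geometric consistency check: flow vectors that deviate strongly from the fitted quadratic profile are unlikely to belong to the drivable surface.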
