Eye Fixation Location Recommendation in Advanced Driver Assistance System

Recent progress in visual attention modeling for mediated perception in advanced driver assistance systems (ADAS) has drawn the attention of both computer and human vision researchers. However, it remains debatable whether the actual driver's eye fixation locations (EFLs) or the EFLs predicted by computational visual attention models (CVAMs) are more reliable for safe driving under real-life conditions. We analyzed the suitability of two kinds of EFLs, those of human drivers and those predicted by CVAMs, using ten typical categories of natural driving video clips. In this analysis, EFLs confirmed by two experienced drivers served as the reference. We found that neither kind of EFL alone is suitable for safe driving in all cases; which EFL is suitable depends on the driving conditions. Based on this finding, we propose a novel strategy for recommending one of the EFLs to the driver in an ADAS under ten predefined real-life driving conditions. Specifically, we recommend one of three EFL modes depending on the conditions: driver's EFL only, CVAM's EFL only, and interchangeable EFL, in which the driver's EFL and the CVAM's EFL are used interchangeably. Selecting between the two EFLs is a typical binary classification problem, which we solve with support vector machines (SVMs), and we evaluate the resulting classifiers quantitatively. The performance evaluation of the proposed recommendation method indicates that it is potentially useful for safe driving in future ADAS.
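To illustrate the classification step described above, the sketch below casts the choice between the driver's EFL and the CVAM's EFL as a binary classification over driving-condition features. The paper applies SVMs (citing LIBSVM); for a self-contained illustration, this sketch trains a small linear SVM with Pegasos-style subgradient descent instead. The feature encoding (scene clutter, ego speed) and the labels are hypothetical, not taken from the paper.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=500, seed=0):
    """Pegasos-style subgradient descent for a linear SVM; labels y in {-1, +1}."""
    rng = random.Random(seed)
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    t = 0
    for _ in range(epochs):
        order = list(range(len(X)))
        rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            # Regularization shrink on the weights
            w = [(1.0 - eta * lam) * wj for wj in w]
            if margin < 1:  # hinge-loss subgradient step on a violating example
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    """Sign of the decision function: +1 or -1."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy training data (hypothetical): feature[0] encodes scene clutter,
# feature[1] ego speed; label +1 = "trust driver's EFL", -1 = "trust CVAM's EFL".
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)

mode = {1: "driver's EFL", -1: "CVAM's EFL"}
print(mode[predict(w, b, [0.85, 0.85])])  # high-clutter, high-speed condition
print(mode[predict(w, b, [0.15, 0.15])])  # low-clutter, low-speed condition
```

In practice the paper's pipeline would use LIBSVM (possibly with a nonlinear kernel) on features extracted from the ten driving-condition categories; the linear, stdlib-only version here only shows the shape of the binary decision.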
