A top-down saliency model for the traffic driving environment

The traffic driving environment is a complex, dynamically changing scene. While driving, drivers focus their attention on the most important and salient areas or targets. Traffic saliency detection is an important application of computer vision that can support autonomous driving, traffic sign detection, driver training, and collision warning. Most existing saliency approaches rely on bottom-up computation, which ignores top-down control and therefore fails to match the actual saliency perceived by drivers. In this paper, by carefully analyzing eye-tracking data from 40 subjects (both drivers and non-drivers) viewing 100 traffic images, we found that drivers' attention was mostly concentrated on the road ahead. We propose that the vanishing point of the road can serve as top-down guidance in a traffic saliency model. We then present a framework that combines bottom-up and top-down saliency, and the results show that our method effectively predicts the attended areas in the driving environment.
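The abstract does not specify how the bottom-up map and the vanishing-point guidance are fused, so the following is only an illustrative sketch: it computes a bottom-up map with the well-known spectral-residual method and modulates it with a Gaussian prior centered on a given vanishing point. The multiplicative fusion, the Gaussian form, and the `sigma` value are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def spectral_residual_saliency(img):
    """Bottom-up saliency via the spectral-residual method (Hou & Zhang, 2007)."""
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Spectral residual: log amplitude minus its local mean (3x3 box filter).
    pad = np.pad(log_amp, 1, mode='edge')
    h, w = log_amp.shape
    avg = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    residual = log_amp - avg
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

def vanishing_point_prior(shape, vp, sigma=40.0):
    """Top-down guidance: Gaussian weight centered on the vanishing point
    (an assumed form; the paper does not give the exact weighting)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - vp[0]) ** 2 + (ys - vp[1]) ** 2) / (2 * sigma ** 2))

# Toy example: a bright off-center blob in a 128x128 grayscale image.
img = np.zeros((128, 128))
img[30:40, 90:100] = 1.0
bottom_up = spectral_residual_saliency(img)
top_down = vanishing_point_prior(img.shape, vp=(64, 64))  # vp = (x, y)
combined = bottom_up * top_down  # multiplicative fusion: one simple choice
```

In practice the vanishing point would come from a detector rather than being given; the Gaussian prior then down-weights salient clutter in the periphery while preserving responses near the road ahead.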
