A Visual Search Model for In-Vehicle Interface Design

As in-vehicle infotainment systems gain functionality, their potential to distract drivers increases. Searching for an item on an interface is a critical concern because a poorly designed interface that draws drivers' attention to less important items can prolong the search for items of interest and pull attention away from roadway events. This potential can be assessed in simulator-based experiments, but computational models of driver behavior could let designers assess and revise their designs far more quickly than waiting weeks to collect human-subjects data. One such model, reported in this paper, predicts drivers' sequences of eye fixations using a Boolean Map-based Saliency model augmented with a top-down feature bias. Comparing the model's predictions with empirical data shows that it can predict search time, especially in cluttered scenes and when the target item is highlighted. We also describe the integration of this model into a web application (http://distraction.engr.wisc.edu/) that can help assess the distraction potential of interface designs.
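To make the approach concrete, the core of a Boolean Map Saliency computation can be sketched as follows. This is a minimal illustration, not the paper's implementation: boolean maps are formed by thresholding each feature channel, a region is considered salient if it is fully surrounded (does not touch the image border), and the optional `channel_weights` parameter is an assumed, simplified stand-in for the paper's top-down feature bias.

```python
import numpy as np
from scipy import ndimage

def boolean_map_saliency(image, channel_weights=None, n_thresholds=8):
    """Sketch of Boolean Map Saliency (after Zhang & Sclaroff, 2013).

    image: H x W x C float array with values in [0, 1].
    channel_weights: optional per-channel weights standing in for a
        top-down feature bias (an assumption for illustration, not the
        paper's exact mechanism).
    """
    h, w, c = image.shape
    if channel_weights is None:
        channel_weights = np.ones(c)
    saliency = np.zeros((h, w))
    # Sample thresholds strictly inside (0, 1).
    thresholds = np.linspace(0.0, 1.0, n_thresholds + 2)[1:-1]
    for ch in range(c):
        for t in thresholds:
            # Each threshold yields a boolean map and its complement.
            for bmap in (image[:, :, ch] > t, image[:, :, ch] <= t):
                # "Surroundedness": keep only connected regions that do
                # not touch the image border.
                labels, _ = ndimage.label(bmap)
                border_labels = np.unique(np.concatenate([
                    labels[0, :], labels[-1, :],
                    labels[:, 0], labels[:, -1]]))
                attention = bmap & ~np.isin(labels, border_labels)
                saliency += channel_weights[ch] * attention
    # Normalize to [0, 1] for comparison across images.
    s = saliency - saliency.min()
    return s / s.max() if s.max() > 0 else s
```

In this sketch, raising a channel's weight biases the final map toward items distinctive in that feature, which is the intuition behind using a top-down bias to model search for a known target (e.g., a highlighted menu item).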
