Attention and Visual Search

Selective Tuning (ST) is a framework for modeling attention; in this work we evaluate how well it accounts for human performance in covert visual search tasks. Two implementations of ST have been developed: the Object Recognition Model, which recognizes and attends to simple objects formed by conjunctions of features, and the Motion Model, which recognizes and attends to motion patterns. The validity of the Object Recognition Model was first tested by successfully replicating the results of Nagy and Sanchez. A second experiment evaluated the model against the observed continuum of search slopes for feature-conjunction searches of varying difficulty. The Motion Model was tested against two experiments in the visual motion domain. A simple odd-man-out search for a counter-clockwise rotating octagon among identical clockwise rotating octagons produced a linear increase in search time with set size. The second experiment was similar to one described by Thornton and Gilden. The results from both implementations agreed with the psychophysical data from the simulated experiments. We conclude that ST provides a valid explanatory mechanism for human covert visual search performance, one that goes well beyond conventional saliency-map-based explanations.
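Search difficulty in such tasks is conventionally summarized by the search slope: the increase in reaction time per added display item, with efficient ("pop-out") searches yielding near-zero slopes and inefficient searches yielding steep ones. A minimal sketch of how a slope is estimated from mean reaction times via least squares (the numbers below are illustrative, not data from the paper's experiments):

```python
# Hedged sketch: estimating a visual-search slope (ms/item) by fitting
# a line to mean reaction time as a function of display set size.
# The data points are made up for illustration.
set_sizes = [4, 8, 16, 32]                 # number of items in the display
mean_rts  = [520.0, 610.0, 790.0, 1150.0]  # mean reaction times in ms (illustrative)

n = len(set_sizes)
mean_x = sum(set_sizes) / n
mean_y = sum(mean_rts) / n

# Ordinary least-squares slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(set_sizes, mean_rts)) \
        / sum((x - mean_x) ** 2 for x in set_sizes)
intercept = mean_y - slope * mean_x

print(f"search slope: {slope:.1f} ms/item, intercept: {intercept:.1f} ms")
# For these illustrative data: slope = 22.5 ms/item, intercept = 430.0 ms
```

A slope of this magnitude would indicate an inefficient, attention-demanding search; a pop-out search would produce a slope near zero.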

References

[1] Laurent Itti, et al. Models of Bottom-up Attention and Saliency, 2005.

[2] John K. Tsotsos, et al. Attending to visual motion, Computer Vision and Image Understanding, 2005.

[3] S. Ullman, et al. Shifts in selective visual attention: towards the underlying neural circuitry, Human Neurobiology, 1985.

[4] A. Nagy, et al. Critical color differences determined with a visual search task, Journal of the Optical Society of America A, Optics and Image Science, 1990.

[5] David L. Gilden, et al. Attentional Limitations in the Sensing of Motion Direction, Cognitive Psychology, 2001.

[6] Gustavo Deco, et al. Computational Neuroscience of Vision, 2002.

[7] Christof Koch, et al. A Model of Saliency-Based Visual Attention for Rapid Scene Analysis, 2009.

[8] S. Marcelja. Mathematical description of the responses of simple cortical cells, Journal of the Optical Society of America, 1980.

[9] A. Treisman, et al. A feature-integration theory of attention, Cognitive Psychology, 1980.

[10] T. Poggio, et al. Hierarchical models of object recognition in cortex, Nature Neuroscience, 1999.

[11] J. Duncan, et al. Visual search and stimulus similarity, Psychological Review, 1989.

[12] John K. Tsotsos, et al. Neurobiology of Attention, 2005.

[13] J. D. Gould, et al. Eye-movement parameters and pattern discrimination, 1969.

[14] C. Connor, et al. Shape representation in area V4: position-specific tuning for boundary conformation, Journal of Neurophysiology, 2001.

[15] N. P. Bichot, et al. Frontal eye field activity before visual search errors reveals the integration of bottom-up and top-down salience, Journal of Neurophysiology, 2005.

[16] John K. Tsotsos, et al. Modeling Visual Attention via Selective Tuning, Artificial Intelligence, 1995.

[17] D. J. Felleman, et al. Distributed hierarchical processing in the primate cerebral cortex, Cerebral Cortex, 1991.