Inattentional Blindness in Visual Search

Models of visual saliency typically fall into one of two camps: models such as Experience-Guided Search (E-GS), which emphasize top-down guidance based on task features, and models such as Attention as Information Maximisation (AIM), which emphasize the role of bottom-up saliency. In this paper, we show that E-GS and AIM are structurally similar and can be unified into a general model of visual search that includes a generic prior over potential non-task-related objects. We demonstrate that this model exhibits inattentional blindness, and that the degree of blindness can be modulated by adjusting the relative precisions of several terms within the model. At the same time, our model correctly accounts for a range of classical visual search results.
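As a rough illustrative sketch only (assumed notation for exposition, not the formulation used in the paper), the unification described above can be pictured as a priority map that mixes a top-down, task-driven term with a bottom-up self-information term and a generic prior over non-task objects, each weighted by its own precision:

\[
\Pi(x) \;=\; \pi_{\mathrm{TD}} \,\log \frac{p\!\left(f(x) \mid T\right)}{p\!\left(f(x)\right)} \;+\; \pi_{\mathrm{BU}} \left[-\log p\!\left(f(x)\right)\right] \;+\; \pi_{0} \,\log p_{0}\!\left(f(x)\right),
\]

where \(f(x)\) denotes the features at location \(x\), \(T\) the search target, \(p_{0}\) a generic prior over non-task-related objects, and \(\pi_{\mathrm{TD}}\), \(\pi_{\mathrm{BU}}\), \(\pi_{0}\) the relative precisions. Under this reading, shrinking \(\pi_{0}\) and \(\pi_{\mathrm{BU}}\) relative to \(\pi_{\mathrm{TD}}\) suppresses the contribution of unexpected, non-task objects to the priority map, which is one way inattentional blindness could emerge from precision weighting.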

[1] J. Wolfe et al. Five factors that guide attention in visual search, 2017, Nature Human Behaviour.

[2] Ken Nakayama et al. Serial and parallel processing of visual feature conjunctions, 1986, Nature.

[3] Antonio Torralba et al. Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search, 2006, Psychological Review.

[4] D. Simons. Attentional capture and inattentional blindness, 2000, Trends in Cognitive Sciences.

[5] John K. Tsotsos et al. Saliency, attention, and visual search: an information theoretic approach, 2009, Journal of Vision.

[6] Jean-François Cardoso. High-Order Contrasts for Independent Component Analysis, 1999, Neural Computation.

[7] Richard Ford et al. How Not to Be Seen, 2007, IEEE Security & Privacy.

[8] C. Chabris et al. Gorillas in Our Midst: Sustained Inattentional Blindness for Dynamic Events, 1999, Perception.

[9] Julia Hockenmaier et al. Sentence-Based Image Description with Scalable, Explicit Models, 2013, IEEE Conference on Computer Vision and Pattern Recognition Workshops.

[10] Michael C. Mozer et al. Experience-Guided Search: A Theory of Attentional Control, 2007, NIPS.

[11] C. Koch et al. Computational modelling of visual attention, 2001, Nature Reviews Neuroscience.

[12] Shenmin Zhang et al. What do saliency models predict?, 2014, Journal of Vision.

[13] Steven B. Most et al. What you see is what you set: sustained inattentional blindness and the capture of awareness, 2005, Psychological Review.

[14] Jochen Braun. It's Great But Not Necessarily About Attention, 2001.

[15] Laurent Itti et al. Top-down attention selection is fine grained, 2006, Journal of Vision.

[16] Christof Koch et al. A Model of Saliency-Based Visual Attention for Rapid Scene Analysis, 2009.

[17] K. Cave. The FeatureGate model of visual selection, 1999, Psychological Research.

[18] J. Wolfe. Inattentional Amnesia, 2000.

[19] D. Simons et al. The Influence of Attention Set, Working Memory Capacity, and Expectations on Inattentional Blindness, 2016, Perception.

[20] J. Wolfe et al. Guided Search 2.0: A revised model of visual search, 1994, Psychonomic Bulletin & Review.

[21] M. Koivisto et al. The effects of perceptual load on semantic processing under inattention, 2009, Psychonomic Bulletin & Review.

[22] Antonio Torralba et al. Context models and out-of-context objects, 2012, Pattern Recognition Letters.

[23] Karl J. Friston et al. Attention, Uncertainty, and Free-Energy, 2010, Frontiers in Human Neuroscience.

[24] Steven B. Most et al. How Not to Be Seen: The Contribution of Similarity and Selective Ignoring to Sustained Inattentional Blindness, 2001, Psychological Science.

[25] Daniel J. Simons et al. Inattentional blindness, 2007, Scholarpedia.

[26] John H. R. Maunsell et al. Feature-based attention in visual cortex, 2006, Trends in Neurosciences.