Pre-attentive and attentive vision module

This paper introduces a new vision module, called PAAV, developed for the cognitive architecture ACT-R. Unlike ACT-R's default vision module, which was originally developed for top-down perception only, PAAV is designed to model a wide range of tasks, such as visual search and scene viewing, in which pre-attentive, bottom-up processes are essential to a model's validity. PAAV builds on the attentive components of the default vision module and adds greater support for modeling the pre-attentive components of human vision. Its design incorporates best practices from existing models of vision. The module's validity was tested on four different tasks.
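To make the pre-attentive/attentive distinction concrete, the following is a minimal Python sketch, not PAAV's actual ACT-R implementation: it ranks candidate objects by a weighted combination of bottom-up feature contrast and top-down match to a sought target, in the spirit of guided-search-style accounts of attention. The object representation, the weighting scheme, and all function names are assumptions chosen purely for illustration.

```python
# Illustrative sketch only (not PAAV's ACT-R code): combining pre-attentive
# bottom-up feature contrast with attentive top-down target matching to
# choose the next object for a shift of visual attention.

from dataclasses import dataclass

@dataclass
class VisualObject:
    name: str
    features: dict  # e.g. {"color": "red", "shape": "circle"}

def bottom_up_salience(obj, scene):
    """Feature contrast: average fraction of other objects that differ on each feature."""
    others = [o for o in scene if o is not obj]
    if not others or not obj.features:
        return 0.0
    score = 0.0
    for feat, value in obj.features.items():
        differing = sum(1 for o in others if o.features.get(feat) != value)
        score += differing / len(others)
    return score / len(obj.features)

def top_down_match(obj, target_features):
    """Fraction of the sought target's features that the object matches."""
    if not target_features:
        return 0.0
    hits = sum(1 for f, v in target_features.items() if obj.features.get(f) == v)
    return hits / len(target_features)

def next_attention_target(scene, target_features, top_down_weight=0.6):
    """Pick the object with the highest weighted sum of top-down and bottom-up activation."""
    def activation(obj):
        return (top_down_weight * top_down_match(obj, target_features)
                + (1 - top_down_weight) * bottom_up_salience(obj, scene))
    return max(scene, key=activation)

if __name__ == "__main__":
    scene = [
        VisualObject("distractor-1", {"color": "blue", "shape": "circle"}),
        VisualObject("distractor-2", {"color": "blue", "shape": "square"}),
        VisualObject("target", {"color": "red", "shape": "circle"}),
    ]
    chosen = next_attention_target(scene, {"color": "red", "shape": "circle"})
    print("Attention shifts to:", chosen.name)  # -> "target"
```

With the top-down weight set to zero the sketch behaves like a purely stimulus-driven (pre-attentive) selection; raising it models increasing top-down guidance, which is the kind of interplay the module is meant to capture.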
