Perception Model for People with Visual Impairments

Scientists from many disciplines, including physiology, psychology, and engineering, have worked on modelling visual perception. However, this field has been less extensively studied in the context of computer science: most existing perception models work only for very specific domains, such as menu-search or icon-search tasks. We are developing a perception model that works for any application. It takes as input a list of mouse events, a sequence of bitmap images of an interface, and the locations of the objects in that interface, and produces a sequence of eye movements as output. We have identified a set of features for differentiating among screen objects and, using those features, our model has reproduced the results of previous experiments on visual perception in the context of HCI. It can also simulate the effects of different visual impairments on interaction. In this paper we discuss the design and implementation of the model and present two pilot studies that demonstrate it.
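To make the model's inputs and output concrete, the sketch below shows one plausible way to represent an interaction trace (mouse events), the interface screenshots and object locations, and the predicted fixation sequence. All names and types here are illustrative assumptions rather than the paper's API, and the trivial scanpath returned at the end is only a placeholder, not the model's actual algorithm.

```python
# Illustrative sketch of the model's input/output interface
# (all names and types are hypothetical; the paper does not specify an API).
from dataclasses import dataclass
from typing import List, Tuple

Bitmap = bytes  # placeholder for a raw screenshot of the interface


@dataclass
class MouseEvent:
    timestamp_ms: int
    x: int
    y: int
    kind: str  # e.g. "move", "press", "release"


@dataclass
class ScreenObject:
    label: str                         # e.g. "OK button"
    bounds: Tuple[int, int, int, int]  # (x, y, width, height)


@dataclass
class Fixation:
    x: int
    y: int
    duration_ms: int


def simulate_scanpath(events: List[MouseEvent],
                      screenshots: List[Bitmap],
                      objects: List[ScreenObject],
                      impairment: str = "none") -> List[Fixation]:
    """Stand-in for the perception model: predict a fixation sequence from an
    interaction trace and an interface description.  The real model scores
    objects by visual features and simulates the given impairment; here we
    simply fixate the centre of each object in turn as a placeholder."""
    return [Fixation(x=o.bounds[0] + o.bounds[2] // 2,
                     y=o.bounds[1] + o.bounds[3] // 2,
                     duration_ms=250)
            for o in objects]
```

Such an interface would let the predicted scanpath be compared directly against recorded eye-tracking data, which is how the pilot studies described in the paper evaluate the model.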