Selective quality rendering by exploiting human inattentional blindness: looking but not seeing

There are two major influences on human visual attention: bottom-up and top-down processing. Bottom-up processing is the automatic, stimulus-driven direction of gaze towards lively or colourful objects, as determined by low-level vision. In contrast, top-down processing is consciously directed attention in the pursuit of predetermined goals or tasks. Previous work in perception-based rendering has exploited bottom-up visual attention to control the detail (and therefore the time) spent on rendering parts of a scene. In this paper, we exploit Inattentional Blindness, a major side effect of top-down processing, whereby portions of the scene unrelated to the specific task go unnoticed. In our experiment, we showed a pair of animations rendered at different quality levels to 160 subjects, and then asked whether they noticed a change. We instructed half the subjects to simply watch the animation, while the other half performed a specific task during the animation. When parts of the scene outside the focus of this task were rendered at lower quality, almost none of the task-directed subjects noticed, whereas the difference was clearly visible to the control group. Our results show that top-down visual processing can be exploited to reduce rendering times substantially without compromising perceived visual quality in interactive tasks.
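The principle can be illustrated with a small sketch (not the authors' system; the function name, sample counts, and scene objects below are hypothetical): a renderer allocates a high sample budget only to objects relevant to the viewer's current task, and a much lower budget everywhere else, on the assumption that task-directed viewers will not notice the degradation.

```python
# Illustrative sketch of selective quality rendering by task relevance.
# All names and sample counts here are hypothetical, chosen for clarity.

def allocate_samples(objects, task_relevant, high=64, low=4):
    """Return a samples-per-pixel budget for each named scene object.

    objects: iterable of object names in the scene.
    task_relevant: set of names the viewer's task focuses on.
    high / low: sample counts for attended vs. unattended objects.
    """
    return {name: (high if name in task_relevant else low)
            for name in objects}

# Example scene: the viewer's task involves only the teapot and the mug.
scene = ["teapot", "mug", "table", "window", "painting"]
budget = allocate_samples(scene, task_relevant={"teapot", "mug"})

total_cost = sum(budget.values())   # cost with selective quality
full_cost = 64 * len(scene)         # cost if everything were high quality
```

In this toy example the selective budget is 140 samples against 320 for uniformly high quality, mirroring the paper's claim that rendering time can be cut substantially when degradation is confined to task-irrelevant regions.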
