The effect of translational ego-motion on the perception of high-fidelity animations

The quality of the graphics displayed in motion simulators can play a significant role in improving a user's training experience in such devices. However, computing high-fidelity graphics with traditional rendering approaches takes a substantial amount of time, precluding their use in such an interactive environment. This paper investigates how the human visual system's handling of motion can be exploited to drive a selective rendering system. Such a selective renderer computes the perceptually important parts of a scene in high quality and the remainder of the scene at a lower quality, and thus at a much reduced computational cost, without the user being aware of the quality difference. In this study we concentrate on translational motion and show that, even for this less dramatic form of motion, a viewer's perception of a scene can be significantly affected. A study was conducted involving 120 subjects across 8 conditions. An additional 'button-press' study of 26 subjects was also carried out. The results of both studies show that viewers could not detect a decrease in rendering quality when subjected to motion.
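The paper itself gives no implementation, but the selective-rendering idea it describes can be illustrated with a minimal sketch: allocate many samples per pixel where a perceptual importance map flags the scene as important, and few elsewhere. Everything in the following Python fragment is an illustrative assumption (the shade stand-in, the sample counts, the threshold policy), not the authors' actual renderer.

```python
import numpy as np

# Hypothetical sketch of selective rendering: perceptually important
# pixels get many Monte Carlo samples (high quality), the rest get
# few (low quality, much cheaper). Names and values are assumptions.

HIGH_SPP = 64   # samples per pixel in perceptually important regions
LOW_SPP = 4     # samples per pixel elsewhere

def shade(x, y, rng):
    """Stand-in for one radiance sample at pixel (x, y)."""
    return rng.random()

def selective_render(importance, threshold=0.5, seed=0):
    """Render an image, varying sample count by perceptual importance.

    importance: 2D array in [0, 1], e.g. a saliency or
                task-importance map.
    """
    rng = np.random.default_rng(seed)
    h, w = importance.shape
    image = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            # Spend effort only where the viewer is predicted to look.
            spp = HIGH_SPP if importance[y, x] >= threshold else LOW_SPP
            samples = [shade(x, y, rng) for _ in range(spp)]
            image[y, x] = sum(samples) / spp
    return image

# Usage: a toy 8x8 importance map with a salient centre region.
imp = np.zeros((8, 8))
imp[2:6, 2:6] = 1.0
img = selective_render(imp)
```

In this sketch the cost saving comes purely from the spp disparity; the study's claim is that, under ego-motion, viewers cannot tell the low-spp regions from the high-spp ones.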
