Selective rendering for efficient ray-traced stereoscopic images

Depth-related visual effects are a key feature of many virtual environments. In stereo-based systems, the depth effect can be produced by delivering a pair of disparate images, one per eye, whereas in monocular environments the viewer must extract depth information from a single image by examining details such as perspective and shadows. This paper investigates, through a series of psychophysical experiments, whether we can reduce computational effort and still achieve perceptually high-quality rendering for stereo imagery. We examined selectively rendering the image pairs by exploiting the fusing capability and depth perception that underlie human stereo vision. In ray-tracing-based global illumination systems, higher image resolution adds computation to the rendering process because many more rays must be traced. We first investigated whether we could exploit the human binocular fusing ability to significantly reduce the resolution of one image of the pair while retaining high perceptual quality under stereo viewing conditions. Secondly, we evaluated subjects' performance on a specific visual task that required accurate depth perception. We found that subjects required far fewer rendered depth cues in the stereo viewing environment to perform the task well, and avoiding rendering these detailed cues saved significant computational time. In fact, it was possible to achieve better task performance in the stereo viewing condition at a combined rendering time for the image pair less than that required for the single monocular image. These results suggest that we can produce stereo images more efficiently for depth-related visual tasks through selective rendering that exploits inherent features of human stereo vision.
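The resolution-dependent cost argument above can be sketched as a simple ray-budget calculation. The sketch below is illustrative only: the function names, the per-pixel cost model, and the 0.5 reduction factor are assumptions for demonstration, not values taken from the paper.

```python
def ray_count(width, height, samples_per_pixel=1):
    """Primary-ray count for one image; tracing cost scales with resolution."""
    return width * height * samples_per_pixel


def stereo_savings(width, height, reduction=0.5, samples_per_pixel=1):
    """Fraction of rays saved when one image of the stereo pair is rendered
    at reduced resolution, relying on binocular fusion to mask the loss.

    `reduction` is the linear scale factor applied to the second image
    (a hypothetical parameter chosen here for illustration).
    """
    full = ray_count(width, height, samples_per_pixel)
    reduced = ray_count(int(width * reduction), int(height * reduction),
                        samples_per_pixel)
    baseline = 2 * full           # both eyes rendered at full resolution
    selective = full + reduced    # one full-resolution image, one reduced
    return 1.0 - selective / baseline


# Halving the linear resolution of one image saves 37.5% of the rays
# relative to rendering both images at full resolution.
print(round(stereo_savings(1024, 768, reduction=0.5), 3))  # → 0.375
```

Under this toy model, the combined cost of the pair (1.25x one full image) still exceeds a single monocular image; the abstract's stronger result, that the pair can cost less than one monocular image, comes from additionally omitting expensive rendered depth cues, which this sketch does not model.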
