GPU-Based Selective Sparse Sampling for Interactive High-Fidelity Rendering

Physically-based renderers can produce highly realistic imagery; however, such methods suffer from lengthy execution times that make them impractical for interactive applications. Selective rendering exploits limitations of the human visual system to render images that are perceptually similar to high-fidelity renderings in a fraction of the time. In this paper, we describe a novel GPU-based selective rendering algorithm that uses the density of indirect lighting samples on the image plane as a selective variable. A high-speed saliency-guided mechanism samples and evaluates a set of representative pixel locations on the image plane, yielding a sparse representation of the indirect lighting in the scene. An image inpainting algorithm then reconstructs a dense representation of the indirect lighting component, which is combined with the direct lighting component to produce the final rendering. Experimental evaluation demonstrates that our selective rendering algorithm achieves a good speedup over standard interleaved sampling and is significantly faster than a traditional GPU-based high-fidelity renderer.
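
To make the pipeline concrete, the sketch below is a minimal, illustrative CUDA outline of the three stages described above: saliency-guided sparse sampling of indirect lighting, reconstruction of a dense indirect buffer, and compositing with direct lighting. It is not the paper's implementation; the per-pixel hash, the placeholder indirect-lighting value, the iterated 3x3 normalized-convolution fill standing in for the inpainting step, and all kernel and buffer names are assumptions introduced purely for illustration.

```cuda
// Illustrative sketch only (assumptions, not the paper's code): saliency-guided
// sparse sampling of indirect lighting, a simple normalized-convolution fill in
// place of the inpainting step, and compositing with direct lighting.
#include <cuda_runtime.h>
#include <cstdio>
#include <utility>

__device__ float hash01(unsigned int x)   // cheap per-pixel pseudo-random value in [0,1)
{
    x ^= x >> 17; x *= 0xed5ad4bbu;
    x ^= x >> 11; x *= 0xac4c1b51u;
    x ^= x >> 15; x *= 0x31848babu;
    return (x & 0x00ffffffu) / 16777216.0f;
}

// Decide per pixel whether to evaluate an (expensive) indirect-lighting sample.
// Higher saliency -> higher sampling probability. Unsampled pixels keep weight 0.
__global__ void sparseSample(const float* saliency, float* indirect, float* weight,
                             int w, int h, float baseRate)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    int i = y * w + x;
    float p = fminf(1.0f, baseRate + saliency[i]);   // sampling density driven by saliency
    bool take = hash01((unsigned int)i) < p;
    indirect[i] = take ? 0.5f : 0.0f;   // placeholder: a real renderer would trace an indirect path here
    weight[i]   = take ? 1.0f : 0.0f;
}

// One normalized-convolution pass: fill holes by averaging valid 3x3 neighbours.
// Iterating this pass densifies the sparse indirect buffer.
__global__ void fillPass(const float* inVal, const float* inW,
                         float* outVal, float* outW, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    float sum = 0.0f, wsum = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = min(max(x + dx, 0), w - 1);
            int ny = min(max(y + dy, 0), h - 1);
            sum  += inVal[ny * w + nx] * inW[ny * w + nx];
            wsum += inW[ny * w + nx];
        }
    int i = y * w + x;
    outVal[i] = wsum > 0.0f ? sum / wsum : 0.0f;
    outW[i]   = wsum > 0.0f ? 1.0f : 0.0f;
}

// Final image = direct lighting + reconstructed indirect lighting.
__global__ void composite(const float* direct, const float* indirect, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = direct[i] + indirect[i];
}

int main()
{
    const int w = 256, h = 256, n = w * h;
    float *sal, *ind, *wt, *ind2, *wt2, *dir, *out;
    cudaMallocManaged(&sal, n * sizeof(float));  cudaMallocManaged(&ind, n * sizeof(float));
    cudaMallocManaged(&wt,  n * sizeof(float));  cudaMallocManaged(&ind2, n * sizeof(float));
    cudaMallocManaged(&wt2, n * sizeof(float));  cudaMallocManaged(&dir, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) { sal[i] = 0.2f; dir[i] = 0.3f; }   // dummy saliency and direct lighting

    dim3 b(16, 16), g((w + 15) / 16, (h + 15) / 16);
    sparseSample<<<g, b>>>(sal, ind, wt, w, h, 0.05f);
    for (int it = 0; it < 8; ++it) {                 // iterate the fill until the buffer is dense
        fillPass<<<g, b>>>(ind, wt, ind2, wt2, w, h);
        std::swap(ind, ind2); std::swap(wt, wt2);
    }
    composite<<<(n + 255) / 256, 256>>>(dir, ind, out, n);
    cudaDeviceSynchronize();
    printf("centre pixel: %f\n", out[(h / 2) * w + w / 2]);
    return 0;
}
```

In an actual renderer, the placeholder value written by sparseSample would be replaced by a traced indirect-lighting estimate, and the iterated 3x3 fill could be replaced by a pull-push pyramid or any other image inpainting scheme; the structure of the three stages is what the sketch is meant to convey.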
