Real-time Rendering with Compressed Animated Light Fields

We propose an end-to-end solution for presenting movie-quality animated graphics to the user while preserving the sense of presence afforded by free-viewpoint head motion. By transforming offline-rendered movie content into a novel immersive representation, we display the content in real time according to the tracked head pose. For each frame, we generate a set of cubemap images (colors and depths) from a sparse set of cameras placed in the vicinity of the potential viewer locations. Camera placement is driven by an optimization process so that the rendered data maximize coverage with minimum redundancy, depending on the complexity of the lighting environment. We compress the colors and depths separately, introducing an integrated spatial and temporal scheme tailored for high performance on GPUs in virtual reality applications. We detail a real-time rendering algorithm using multi-view ray casting and view-dependent decompression. Compression rates of 150:1 and greater are demonstrated, with quantitative analysis of image reconstruction quality and performance.
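The abstract's pipeline samples the scene into cubemap color and depth images and reconstructs novel views by ray casting against them. A basic building block of any such renderer is mapping a world-space ray direction to a cubemap face and texel coordinate. The paper does not specify its lookup convention, so the sketch below assumes the common OpenGL-style face ordering (+X, -X, +Y, -Y, +Z, -Z); it is an illustrative helper, not the authors' implementation.

```python
def cubemap_face_uv(x, y, z):
    """Map a 3D ray direction to a cubemap face index and (u, v) in [0, 1].

    Assumes OpenGL-style face order: 0:+X, 1:-X, 2:+Y, 3:-Y, 4:+Z, 5:-Z.
    The dominant axis of the direction selects the face; the remaining two
    components, divided by the dominant magnitude, give the face-local
    coordinates, remapped from [-1, 1] to [0, 1].
    """
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:          # X-major direction
        face = 0 if x > 0 else 1
        sc, tc, ma = (-z, -y, ax) if x > 0 else (z, -y, ax)
    elif ay >= az:                     # Y-major direction
        face = 2 if y > 0 else 3
        sc, tc, ma = (x, z, ay) if y > 0 else (x, -z, ay)
    else:                              # Z-major direction
        face = 4 if z > 0 else 5
        sc, tc, ma = (x, -y, az) if z > 0 else (-x, -y, az)
    u = 0.5 * (sc / ma + 1.0)
    v = 0.5 * (tc / ma + 1.0)
    return face, u, v
```

In a full renderer, the returned `(face, u, v)` would index into the decompressed color and depth faces of each nearby camera when marching rays for the tracked head pose.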
