Gaze-Contingent Computational Displays: Boosting Perceptual Fidelity

Contemporary digital displays offer millions of pixels at ever-increasing refresh rates. Reality, on the other hand, provides us with a view of the world that is continuous in both space and time. This discrepancy between viewing the physical world and viewing its sampled depiction on a digital display degrades perceived image quality. By measuring or estimating where we look, a new breed of gaze-contingent algorithms exploits the way we visually perceive digital images and videos in order to remedy visible artifacts. In this article, we provide an overview of recent developments in computational display algorithms that enhance the perceived visual quality of conventional video footage when viewed on commodity monitors, projectors, or head-mounted displays (HMDs).
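
To make the core idea concrete, the sketch below (illustrative only, and not taken from any particular system discussed in this article) computes a per-pixel blur radius that grows with angular distance, or eccentricity, from a tracked gaze point. The viewing geometry, the assumed 2-degree foveal radius, and the falloff slope are all illustrative assumptions, not measured parameters.

    # Minimal sketch of the gaze-contingent principle: degrade detail with
    # eccentricity from the tracked gaze point. All constants are assumptions.
    import numpy as np

    def eccentricity_deg(px, py, gaze_x, gaze_y, px_per_deg):
        """Angular distance (in degrees) of each pixel from the gaze point."""
        return np.hypot(px - gaze_x, py - gaze_y) / px_per_deg

    def blur_sigma(ecc_deg, fovea_deg=2.0, slope=0.3):
        """Blur radius that grows linearly outside an assumed 2-degree fovea,
        mimicking the falloff of visual acuity in the periphery."""
        return np.maximum(0.0, ecc_deg - fovea_deg) * slope

    # Example: a 1920x1080 frame viewed at roughly 40 pixels per visual degree,
    # with gaze currently resting near the image centre.
    h, w = 1080, 1920
    ys, xs = np.mgrid[0:h, 0:w]
    sigma_map = blur_sigma(eccentricity_deg(xs, ys, 960, 540, px_per_deg=40))
    # sigma_map can then drive a spatially varying filter or per-region shading rate.

In an actual gaze-contingent pipeline, such an eccentricity map would be recomputed every frame from live eye-tracker samples, and the update latency must stay below the detection thresholds reported in the perceptual literature for the manipulation to remain invisible.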
