Focus and Context in Mixed Reality by Modulating First Order Salient Features

We present a technique for dynamically directing a viewer's attention to a focus object by analyzing and modulating the bottom-up salient features of a video feed. Rather than applying a static modulation strategy, we inspect the saliency map of the original image and automatically modify the image to favor the focus object. Image fragments are adaptively darkened, lightened, and shifted in hue according to local contrast information rather than global parameters, so that the user's attention is suggested, rather than forced, towards a specific location. The technique aims to apply only minimal changes to the image while achieving a desired saliency difference between the focus and context regions. It exhibits temporal and spatial coherence and runs at interactive frame rates using GPU shaders. We present several application examples from the field of Mixed Reality, or more precisely, Mediated Reality.
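The core idea of contrast-driven context suppression can be sketched in a few lines. The following is a minimal illustration, not the paper's actual GPU-shader implementation: it uses plain luminance contrast as a stand-in for a full bottom-up saliency model, and the function name, parameters, and `strength` constant are hypothetical. Context pixels are attenuated in proportion to their own local contrast, so low-contrast regions are barely touched, echoing the minimal-change goal.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def suppress_context(image, focus_mask, strength=0.4):
    """Darken context pixels in proportion to their local contrast,
    nudging bottom-up saliency toward the focus region.

    image:      float array (H, W, 3) with values in [0, 1]
    focus_mask: bool array (H, W), True inside the focus region
    strength:   maximum fractional attenuation of a context pixel
    """
    # Luminance as a crude first-order saliency proxy.
    lum = image.mean(axis=2)
    # Local contrast: deviation from the neighbourhood mean (box blur).
    contrast = np.abs(lum - uniform_filter(lum, size=7))
    # Attenuation map: high-contrast context pixels are darkened most,
    # quiet regions are left almost unchanged.
    atten = 1.0 - strength * contrast / (contrast.max() + 1e-8)
    # Never touch the focus region itself.
    atten = np.where(focus_mask, 1.0, atten)
    return image * atten[..., None]
```

A real-time version would evaluate the same per-fragment logic in a GPU shader and would additionally lighten or shift hue, as the abstract describes, rather than only darkening.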
