Saliency-based image editing for guiding visual attention

A natural interface with humans is the most important part of an information system that assists human activities. Gaze information strongly reflects a person's interest and attention, so a gaze-based interface is promising for future use. In particular, if we can smoothly guide the user's visual attention toward a target without interrupting their current attention, the usefulness of a gaze-based interface will be greatly enhanced. To realize such an interface, this paper proposes a method that, given a region in an image, edits the image so that the region becomes the most salient. Our method first computes a saliency map of the given image and then iteratively adjusts intensity and color until the saliency inside the region becomes the highest in the entire image. Experimental results confirm that our image editing method naturally draws human visual attention toward the specified region.
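The sketch below is a minimal, hypothetical Python illustration of this kind of iterative adjustment loop, not the authors' implementation. It assumes OpenCV's spectral-residual saliency model (from opencv-contrib-python) as a stand-in for the Itti-Koch style map used in the paper, uses simple saturation/value scaling as the edit operators, and approximates the "region is most salient" stopping criterion with a mean-saliency comparison; all of these choices are assumptions for illustration.

```python
import numpy as np
import cv2  # requires opencv-contrib-python for the saliency module (assumption)

def compute_saliency(image_bgr):
    """Stand-in saliency model: OpenCV's spectral-residual map.
    The paper uses an Itti-Koch style map; this is an assumed substitute."""
    model = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal = model.computeSaliency(image_bgr)
    return sal if ok else np.zeros(image_bgr.shape[:2], np.float32)

def emphasize_region(image_bgr, mask, step=0.05, max_iters=20):
    """Iteratively raise saturation/value inside `mask` and lower them
    outside, until mean saliency inside exceeds mean saliency outside."""
    edited = image_bgr.copy()
    inside = mask > 0
    for _ in range(max_iters):
        sal = compute_saliency(edited)
        if sal[inside].mean() > sal[~inside].mean():
            break  # the target region is now the most salient on average
        hsv = cv2.cvtColor(edited, cv2.COLOR_BGR2HSV).astype(np.float32)
        for ch in (1, 2):  # saturation and value channels
            hsv[..., ch][inside] *= 1.0 + step
            hsv[..., ch][~inside] *= 1.0 - step
        edited = cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8),
                              cv2.COLOR_HSV2BGR)
    return edited
```

A usage example would pass a photograph and a binary mask of the target region, e.g. `emphasize_region(cv2.imread("scene.jpg"), mask)`; the step size trades off how quickly the region becomes salient against how visible the edit is to the viewer.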
