Touch saliency

In this work, we propose the new concept of touch saliency and attempt to answer the question of whether an underlying image saliency map can be implicitly derived from accumulated touch behaviors (specifically, zoom-in and panning manipulations) as many users browse an image on smart mobile devices with small multi-touch displays. We collect touch saliency maps for the images of the recently released NUSEF dataset, and a preliminary comparison study demonstrates that: 1) the touch saliency map is highly correlated with the human eye fixation map for the same stimuli, yet compared to the latter, touch data collection is far more flexible and requires no special cooperation from users; and 2) touch saliency is also well predicted by popular saliency detection algorithms. This study opens a new research direction in multimedia analysis: harnessing human touch information on increasingly popular multi-touch smart mobile devices.
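
The abstract does not specify how touch behaviors are aggregated into a saliency map. The sketch below illustrates one plausible reading, assuming each zoom-in or panning gesture is logged as an exposed viewport rectangle in image coordinates; the function name, the inverse-area weighting (deeper zoom implies stronger interest), and the normalization are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def touch_saliency_map(image_shape, viewports):
    """Accumulate a touch saliency map from logged viewport rectangles.

    image_shape: (height, width) of the browsed image.
    viewports:   iterable of (x, y, w, h) rectangles, one per zoom-in or
                 panning state, pooled over many users.
    Assumption: smaller viewports (deeper zoom) signal stronger interest,
    so each exposure is weighted inversely by its area.
    """
    h_img, w_img = image_shape
    acc = np.zeros((h_img, w_img), dtype=np.float64)
    for x, y, w, h in viewports:
        # Clip the viewport to the image bounds.
        x0, y0 = max(0, x), max(0, y)
        x1, y1 = min(w_img, x + w), min(h_img, y + h)
        if x1 <= x0 or y1 <= y0:
            continue
        # Inverse-area weight: a fully zoomed-out view contributes 1.
        weight = (w_img * h_img) / float((x1 - x0) * (y1 - y0))
        acc[y0:y1, x0:x1] += weight
    # Normalize to [0, 1] so maps are comparable across images.
    if acc.max() > 0:
        acc /= acc.max()
    return acc

# Hypothetical viewports pooled from several users browsing a 480x640 image.
views = [(100, 50, 200, 150), (120, 60, 80, 60), (0, 0, 640, 480)]
smap = touch_saliency_map((480, 640), views)
```

A map built this way can then be compared against an eye fixation map for the same stimulus with standard measures such as a correlation coefficient over the two normalized maps.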
