Visual saliency model based on minimum description length

In this paper, a novel patch-wise visual saliency model based on the Minimum Description Length (MDL) principle is presented. Visual saliency is measured as the unpredictable information of an image patch, obtained from an order-adaptive predictor designed under the MDL principle. Specifically, each image patch is estimated as a linear combination of several neighboring patches, and the number and locations of the candidate patches are automatically adapted to the local context under MDL. The entropy of the prediction residuals of the center patch, which reflects how surprising the patch is to the visual system, is then used to measure its saliency. Furthermore, a structural redundancy operator is introduced to further improve detection performance. Experimental results demonstrate that the MDL-based predictor, together with the structural redundancy operator, improves the accuracy of human fixation prediction, and the proposed model outperforms mainstream algorithms in predicting human fixations.
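
The following is a minimal, illustrative sketch (Python/NumPy) of the patch-wise prediction idea described above, not the authors' implementation: the MDL-driven, order-adaptive selection of candidate patches and the structural redundancy operator are omitted, and a fixed 8-neighborhood with a least-squares fit is used instead. Function names such as patch_saliency and residual_entropy are illustrative only.

import numpy as np

def residual_entropy(residual, bins=32):
    """Shannon entropy (bits) of the prediction-residual histogram."""
    hist, _ = np.histogram(residual, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def patch_saliency(gray, patch=8, num_neighbors=8):
    """Predict each patch as a least-squares linear combination of its
    spatial neighbors; score it by the entropy of the residual.
    (Simplified stand-in for the paper's MDL order-adaptive predictor.)"""
    H, W = gray.shape
    ys = list(range(0, H - patch + 1, patch))
    xs = list(range(0, W - patch + 1, patch))
    # Vectorized patches on a non-overlapping grid: shape (ny, nx, patch*patch).
    grid = np.array([[gray[y:y + patch, x:x + patch].astype(np.float64).ravel()
                      for x in xs] for y in ys])
    ny, nx, _ = grid.shape
    sal = np.zeros((ny, nx))
    for i in range(ny):
        for j in range(nx):
            # Collect up to num_neighbors surrounding patches as predictors.
            nbrs = [grid[i + di, j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0)
                    and 0 <= i + di < ny and 0 <= j + dj < nx][:num_neighbors]
            if not nbrs:
                continue
            A = np.stack(nbrs, axis=1)            # design matrix, shape (d, k)
            target = grid[i, j]
            w, *_ = np.linalg.lstsq(A, target, rcond=None)
            sal[i, j] = residual_entropy(target - A @ w)
    return sal

# Usage (assuming `img` is a 2-D grayscale array): sal_map = patch_saliency(img)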
