A performance evaluation of fusion techniques for spatio-temporal saliency detection in dynamic scenes

Visual saliency is an important research topic in computer vision, as it allows applications to focus on regions of interest instead of processing the whole image. Detecting visual saliency in still images has been widely addressed in the literature. However, visual saliency detection in videos is more complicated because of the additional temporal information. A spatio-temporal saliency map is usually obtained by fusing a static saliency map with a dynamic saliency map, and the way the two maps are fused plays a critical role in the accuracy of the resulting spatio-temporal map. In this paper, we evaluate the performance of different fusion techniques on a large and diverse dataset. The results show that the fusion method should be selected according to the characteristics of a sequence, in terms of color and motion contrasts. Overall, fusion techniques that retain the best of each saliency map (static and dynamic) in the final spatio-temporal map achieve the best results.
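
To make the fusion step concrete, the sketch below (not the paper's exact pipeline) builds a simple static map from color contrast and a dynamic map from dense optical-flow magnitude, then combines them with a few candidate fusion schemes (mean, product, pixel-wise maximum, and a globally weighted average). It assumes OpenCV and NumPy; all function and parameter names are illustrative.

```python
# Minimal sketch of spatio-temporal saliency fusion (assumed pipeline, not the paper's).
import cv2
import numpy as np

def static_saliency(frame_bgr):
    """Color-contrast static saliency: distance of each pixel to the mean Lab color."""
    blurred = cv2.GaussianBlur(frame_bgr, (5, 5), 0)
    lab = cv2.cvtColor(blurred, cv2.COLOR_BGR2LAB).astype(np.float32)
    mean_lab = lab.reshape(-1, 3).mean(axis=0)
    sal = np.linalg.norm(lab - mean_lab, axis=2)
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)

def dynamic_saliency(prev_gray, curr_gray):
    """Motion saliency: magnitude of dense Farneback optical flow between two frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    return cv2.normalize(mag, None, 0.0, 1.0, cv2.NORM_MINMAX)

def fuse(s, d, method="max"):
    """Candidate spatio-temporal fusion schemes for two normalized maps s and d."""
    if method == "mean":       # plain average of both maps
        return 0.5 * (s + d)
    if method == "product":    # multiplicative fusion: requires both cues to agree
        return s * d
    if method == "max":        # keep the strongest cue at each pixel
        return np.maximum(s, d)
    if method == "weighted":   # weight each map by its own global strength
        ws, wd = s.max(), d.max()
        return (ws * s + wd * d) / max(ws + wd, 1e-6)
    raise ValueError(f"unknown fusion method: {method}")
```

Schemes such as the pixel-wise maximum or the globally weighted average are examples of "taking the best of each map": a region that is salient in only one of the two maps still survives in the fused result, whereas multiplicative fusion suppresses it.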
