Moving Foreground Detection Based On Spatio-temporal Saliency

Detection of moving foreground in video is important for many applications, such as visual surveillance and object-based video coding. When objects move at different speeds or under illumination changes, the robustness of the moving object detection methods proposed so far is still unsatisfactory. In this paper, we use semantic information, obtained from a spatial saliency map based on a Gaussian mixture model (GMM) in luma space and a temporal saliency map obtained by background subtraction, to adjust the pixel-wise learning rate adaptively for more robust detection. In addition, we design a two-pass background estimation framework, in which the initial estimate is used for temporal saliency estimation and the second pass is used to detect the foreground and update the model parameters. Experimental results show that our method achieves better moving object extraction performance than the existing GMM-based background subtraction method.
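As a rough illustration of the saliency-driven learning-rate idea (not the authors' implementation), the sketch below modulates a pixel-wise learning rate of a simple running-average background model with a combined spatio-temporal saliency map. The spatial saliency here is a local-contrast placeholder rather than the paper's GMM-based map, the running-average model stands in for the full per-pixel GMM, and all function names, thresholds, and learning-rate bounds are hypothetical choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_saliency(luma, size=9):
    """Crude spatial saliency: normalized deviation of each pixel's luma from
    its local mean (stand-in for the paper's GMM-based spatial saliency map)."""
    dev = np.abs(luma - uniform_filter(luma, size=size))
    return dev / (dev.max() + 1e-6)

def temporal_saliency(luma, background):
    """Temporal saliency via background subtraction against the current
    (first-pass) background estimate."""
    return np.clip(np.abs(luma - background) / 255.0, 0.0, 1.0)

def process_frame(luma, background,
                  alpha_min=0.001, alpha_max=0.05, fg_threshold=0.15):
    """Two-pass idea in miniature:
    pass 1 - compute spatio-temporal saliency from the initial background estimate;
    pass 2 - detect foreground and update the model with a pixel-wise learning
    rate that is small where saliency is high (likely foreground) and large
    where saliency is low (likely background)."""
    t_sal = temporal_saliency(luma, background)
    s_sal = spatial_saliency(luma)
    saliency = np.clip(0.5 * (t_sal + s_sal), 0.0, 1.0)

    fg_mask = t_sal > fg_threshold

    # Saliency-modulated, pixel-wise learning rate.
    alpha = alpha_max - (alpha_max - alpha_min) * saliency
    background = (1.0 - alpha) * background + alpha * luma
    return fg_mask, background

# Usage sketch: feed grayscale (luma) frames as float32 arrays.
# background = first_frame.astype(np.float32)
# for frame in frames:
#     fg_mask, background = process_frame(frame.astype(np.float32), background)
```

The key design point this sketch tries to capture is that salient (likely foreground) pixels are absorbed into the background slowly, while non-salient pixels adapt quickly to illumination changes.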
