Motion Compensation Based Fast Moving Object Detection in Dynamic Background

This paper investigates robust and fast moving object detection in dynamic backgrounds. A motion compensation based approach is proposed that maintains an online background model, from which moving objects are then detected efficiently. Specifically, a pixel-level background model is built for each pixel, represented by a set of pixel values drawn from its location and neighborhood. Given the background models of the previous frame, an edge-preserving optical flow algorithm is employed to estimate the motion of each pixel, and their background models are propagated accordingly to the current frame. Each pixel is then classified as foreground or background according to the compensated background model. Moreover, the compensated background model is updated online by a fast random scheme to adapt to background variation. Extensive experiments on collected challenging videos suggest that our method outperforms other state-of-the-art methods while running at 8 fps.
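The pipeline the abstract describes — a sample-based per-pixel background model, flow-guided compensation of that model, and a randomized update — can be sketched as follows. This is a minimal illustration assuming a ViBe-style sample model on grayscale frames; all function names, parameter values (`radius`, `min_matches`, `rate`), and the nearest-neighbor warping are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def classify_pixels(frame, bg_samples, radius=20.0, min_matches=2):
    """Sample-based classification (ViBe-style, assumed): a pixel is
    background if its value lies within `radius` of at least
    `min_matches` of its stored background samples."""
    # bg_samples: (H, W, N) stack of N sample values per pixel
    diff = np.abs(bg_samples - frame[..., None])      # (H, W, N)
    matches = (diff < radius).sum(axis=-1)
    return matches >= min_matches                     # True = background

def compensate_model(bg_samples, flow):
    """Warp each pixel's background model along the estimated per-pixel
    optical flow, so the model follows the moving background.
    Nearest-neighbor sampling is used here for simplicity."""
    H, W, _ = bg_samples.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # source coordinates in the previous frame, clipped to the image
    src_x = np.clip((xs - flow[..., 0]).round().astype(int), 0, W - 1)
    src_y = np.clip((ys - flow[..., 1]).round().astype(int), 0, H - 1)
    return bg_samples[src_y, src_x]

def update_model(bg_samples, frame, is_bg, rate=16, rng=None):
    """Fast randomized update: each pixel classified as background
    replaces one randomly chosen sample with probability 1/rate."""
    rng = np.random.default_rng() if rng is None else rng
    H, W, N = bg_samples.shape
    chosen = is_bg & (rng.integers(0, rate, size=(H, W)) == 0)
    slot = rng.integers(0, N, size=(H, W))
    ys, xs = np.nonzero(chosen)
    bg_samples[ys, xs, slot[ys, xs]] = frame[ys, xs]
    return bg_samples
```

Per frame, one would run `compensate_model` with the estimated flow, then `classify_pixels` on the compensated model, then `update_model` on the background pixels; the random, in-place update keeps the per-frame cost low, consistent with the reported real-time performance.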
