Co-occurrence-based adaptive background model for robust object detection

An illumination-invariant background model for detecting objects in dynamic scenes is proposed. The model is robust to sudden illumination fluctuations as well as to bursts of background motion. Unlike previous approaches, it distinguishes objects from a dynamic background using the co-occurrence characteristics between a target pixel and its supporting pixels, modeled as multiple pixel pairs. Experiments on several challenging datasets demonstrate robust object-detection performance across a variety of environments.
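The core idea can be illustrated with a minimal sketch. Since a global illumination change shifts a target pixel and its supporting pixels together, the *difference* within each pixel pair stays stable, while a foreground object breaks that co-occurrence. The sketch below models each pair difference with a per-pair mean and standard deviation learned from background frames; the function names, the Gaussian-style pair statistics, the deviation threshold `k`, and the majority-vote rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def train_pair_model(frames, target, supports):
    """Learn mean/std of intensity differences between a target pixel
    and each of its supporting pixels over background-only frames.
    (Simplified stand-in for the paper's co-occurrence pair model.)"""
    diffs = np.array(
        [[f[target] - f[s] for s in supports] for f in frames], dtype=float
    )
    # Small epsilon keeps the threshold usable when training noise is tiny.
    return diffs.mean(axis=0), diffs.std(axis=0) + 1e-6

def is_foreground(frame, target, supports, mean, std, k=2.5):
    """Flag the target pixel as foreground when a majority of its pair
    differences deviate from the learned statistics by more than k sigma."""
    d = np.array([frame[target] - frame[s] for s in supports], dtype=float)
    votes = np.abs(d - mean) > k * std
    return votes.mean() > 0.5
```

In use, a frame brightened uniformly leaves every pair difference unchanged and is classified as background, whereas a local intensity change at the target pixel alone disturbs all of its pairs and is flagged as foreground, which is what gives the pair-based model its illumination invariance.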
