Improved codebook model based on spatio-temporal context

Spatio-temporal context refers to the information carried by each pixel's historical status and by its adjacent pixels. The proposed algorithm, built on the codebook model, applies this spatio-temporal context to foreground detection: for each pixel, a weight is computed from the pixel's spatio-temporal context and used to adjust the detection conditions. This makes the detection results more accurate, especially in interference regions such as waving trees and sudden illumination changes.
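The abstract does not give the exact weight formula, but the idea can be sketched as follows: a codeword stores an intensity range, and the matching tolerance is scaled by a weight derived from the pixel's recent labels and its neighbours' labels. All names (`match_codeword`, `spatio_temporal_weight`) and the specific weight formula below are hypothetical illustrations, not the paper's actual method.

```python
import numpy as np

def spatio_temporal_weight(history_fg, neighbor_fg):
    # Toy weight: average of (a) the fraction of recent frames in which this
    # pixel was labelled foreground and (b) the fraction of its neighbours
    # currently labelled foreground. Dynamic regions such as waving trees,
    # which produce frequent spurious foreground labels, get larger weights.
    return 0.5 * np.mean(history_fg) + 0.5 * np.mean(neighbor_fg)

def match_codeword(value, codeword, weight, eps=10.0):
    # A codeword is a (low, high) intensity range. The spatio-temporal weight
    # loosens the matching tolerance, so pixels in interference regions are
    # more readily absorbed into the background model.
    lo, hi = codeword
    slack = eps * weight
    return (lo - slack) <= value <= (hi + slack)

# A pixel in a waving-tree region: frequently foreground in recent frames
# and surrounded by foreground-labelled neighbours.
w = spatio_temporal_weight([1, 1, 0, 1], [1, 0, 1, 1, 1, 0, 1, 1])  # 0.75
cw = (100.0, 120.0)
print(match_codeword(125.0, cw, w))    # loose threshold: treated as background
print(match_codeword(125.0, cw, 0.0))  # strict threshold: flagged as foreground
```

A static pixel (weight near 0) keeps the strict codebook test, while a pixel with turbulent history is judged more leniently, which is one plausible way a context-dependent weight could suppress false detections in dynamic backgrounds.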
