Background subtraction based on Local Shape

We present a novel approach to background subtraction based on the local shape of small image regions. In our approach, an image region centered on a pixel is modeled using the local self-similarity descriptor. We aim at obtaining reliable change detection based on local shape changes in an image when foreground objects move. The method first builds a background model and then compares local self-similarities between the background model and subsequent frames to distinguish background from foreground objects. Post-processing is then used to refine the boundaries of moving objects. Results show that this approach is promising, as the foregrounds obtained are complete, although they often include shadows.
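
The descriptor comparison described above can be illustrated with a short sketch. The snippet below is a minimal, simplified illustration, not the authors' implementation: it computes a raw self-similarity correlation surface at a pixel (omitting the log-polar binning of the full descriptor) and labels the pixel as foreground when the surface computed on the current frame correlates poorly with the one computed on the background model. The patch size, region size, noise variance, and threshold are illustrative assumptions.

```python
import numpy as np

def local_self_similarity(img, y, x, patch=5, region=20, var_noise=25.0**2):
    """Simplified local self-similarity surface at pixel (y, x).

    Compares the small patch centered at (y, x) with every patch in the
    surrounding region and turns the SSD surface into a correlation
    surface. A full descriptor would further bin this surface into
    log-polar cells; that step is omitted here for brevity.
    Assumes (y, x) lies far enough from the image border.
    """
    half_p, half_r = patch // 2, region // 2
    center = img[y - half_p:y + half_p + 1,
                 x - half_p:x + half_p + 1].astype(np.float64)
    surface = np.empty((region + 1, region + 1))
    for dy in range(-half_r, half_r + 1):
        for dx in range(-half_r, half_r + 1):
            cand = img[y + dy - half_p:y + dy + half_p + 1,
                       x + dx - half_p:x + dx + half_p + 1].astype(np.float64)
            ssd = np.sum((center - cand) ** 2)          # patch dissimilarity
            surface[dy + half_r, dx + half_r] = np.exp(-ssd / var_noise)
    return surface.ravel()

def is_foreground(bg_frame, cur_frame, y, x, threshold=0.6):
    """Label a pixel as foreground when its self-similarity surface in the
    current frame differs too much from the background model's surface,
    measured by normalized correlation (threshold is an assumed value)."""
    d_bg = local_self_similarity(bg_frame, y, x)
    d_cur = local_self_similarity(cur_frame, y, x)
    num = np.dot(d_bg, d_cur)
    den = np.linalg.norm(d_bg) * np.linalg.norm(d_cur) + 1e-12
    return (num / den) < threshold
```

In this sketch the per-pixel decisions would then be assembled into a binary mask, to which the post-processing step mentioned above (boundary refinement) could be applied.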
