Tracking objects in video using motion and appearance models

This paper proposes a visual tracking algorithm that combines motion and appearance in a statistical framework. It is assumed that image observations are generated simultaneously from a background model and a target appearance model. This differs from conventional appearance-based tracking, which does not use motion information. The proposed algorithm maximizes the likelihood ratio of the tracked region, derived from the appearance and background models. Incorporating motion into appearance-based tracking yields robust tracking even when the target violates the appearance model. We show that the proposed algorithm efficiently tracks targets over long time intervals.
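The abstract does not specify the exact form of the appearance and background models, but the core idea, scoring a candidate region by the ratio of its likelihood under the target appearance model to its likelihood under the background model, can be illustrated with a short sketch. The following is a minimal example assuming grayscale frames, Gaussian appearance and background models, and a particle-filter search with a random-walk motion model; the function names and parameters (`appearance_loglik`, `background_loglik`, `motion_std`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def appearance_loglik(patch, template, sigma=10.0):
    # Gaussian appearance model around a fixed target template (assumption).
    return -0.5 * np.sum((patch - template) ** 2) / sigma ** 2

def background_loglik(patch, bg_mean, bg_var):
    # Per-pixel Gaussian background model (assumption).
    return -0.5 * np.sum((patch - bg_mean) ** 2 / bg_var)

def track_frame(frame, template, bg_mean, bg_var, particles,
                patch_size, motion_std=3.0):
    """One step of a particle-filter tracker whose weights are the
    appearance-to-background log-likelihood ratio of each candidate region."""
    h, w = patch_size
    H, W = frame.shape
    # Propagate particles (top-left corners, (y, x)) with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, H - h)
    particles[:, 1] = np.clip(particles[:, 1], 0, W - w)

    log_w = np.empty(len(particles))
    for i, (y, x) in enumerate(particles.astype(int)):
        patch = frame[y:y + h, x:x + w]
        # Likelihood ratio: how much better the appearance model explains
        # this region than the background model does.
        log_w[i] = (appearance_loglik(patch, template)
                    - background_loglik(patch,
                                        bg_mean[y:y + h, x:x + w],
                                        bg_var[y:y + h, x:x + w]))

    # Normalize weights and form the state estimate as the weighted mean.
    weights = np.exp(log_w - log_w.max())
    weights /= weights.sum()
    estimate = (weights[:, None] * particles).sum(axis=0)

    # Resample particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], estimate
```

In this sketch the tracked location is the region that best trades off fitting the target appearance against being explained by the background, which is one plausible reading of "maximizing the likelihood ratio of the tracked region"; the real algorithm may use different models and a different search strategy.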
