Real-Time Uncharacteristic-Part Tracking with a Point Set

In this research, we focus on tracking a target region that lies next to visually similar regions (e.g., a forearm next to an upper arm) in zoom-in images. Many previous tracking methods express the target region (i.e., a part of a human body) with a single model such as an ellipse, a rectangle, or a deformable closed region. With such a single model, however, it is difficult to track the target region in zoom-in images without confusing it with its neighboring similar regions (e.g., a forearm and an upper arm, or a small region in a torso and its neighbors), because these regions may share the same texture patterns and lack a detectable border between them. In our method, a group of feature points extracted in the target region is tracked as the model of the target. Small differences between neighboring regions can be verified by focusing only on these feature points. In addition, (1) the stability of tracking is improved using particle filtering, and (2) robustness to occlusions is achieved by removing unreliable points using random sampling. Experimental results demonstrate the effectiveness of our method even when occlusions occur.

key words: real-time tracking, zoom-in camera, point-set tracking
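The random-sampling step for removing unreliable points can be sketched roughly as follows. This is an illustrative RANSAC-style filter under an assumed pure-translation motion model, not the paper's exact formulation; the function name, parameters, and tolerance are hypothetical.

```python
import random

def robust_filter(prev_pts, curr_pts, n_iters=100, tol=3.0, seed=0):
    """Remove unreliable point correspondences via random sampling.

    Illustrative sketch (not the paper's algorithm): repeatedly sample
    one correspondence, hypothesize a pure-translation motion for the
    whole point set, and keep the hypothesis supported by the most
    points. Points far from the winning motion are treated as
    occluded/unreliable, and only the indices of reliable points are
    returned.
    """
    rng = random.Random(seed)
    # Per-point displacement between the previous and current frame.
    displacements = [(cx - px, cy - py)
                     for (px, py), (cx, cy) in zip(prev_pts, curr_pts)]
    best_inliers = []
    for _ in range(n_iters):
        # Hypothesize: the whole part moved by one sampled displacement.
        dx, dy = rng.choice(displacements)
        inliers = [i for i, (ux, uy) in enumerate(displacements)
                   if (ux - dx) ** 2 + (uy - dy) ** 2 <= tol ** 2]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

For example, if four points of the set move consistently while one point jumps away (as when it becomes occluded and its tracker drifts), the filter keeps the four consistent points and drops the outlier.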
