A New Object Tracking Framework for Interest Point Based Feature Extraction Algorithms

This paper presents a novel object tracking framework for interest point based feature extraction algorithms. The proposed framework uses the feature extraction algorithm without modification and relies on three stages: outlier detection, object modelling, and object tracking. First, keypoints are extracted with a feature extraction algorithm, and incorrect keypoint matches are detected and removed by the DBSCAN algorithm. Second, the object is modelled as a bounding box; the box model has six points, and each of these points has its own Gaussian model. Finally, these Gaussian models are used for object tracking: the previous five position values are retained to detect incorrect position information, so that object movements are smoothed and instantaneous deviations are eliminated. Our interest point based object tracking framework (IPBOT) works with any interest point based feature extraction algorithm, so a new algorithm can be integrated into the framework with a short integration process. The experimental results show that the proposed tracker significantly improves the object tracking success rate.
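As a rough illustration of the described pipeline, the sketch below combines an off-the-shelf interest point extractor with DBSCAN-based outlier removal and a simple smoothing step over the last five bounding boxes. It is a minimal approximation under stated assumptions, not the authors' implementation: the choice of SIFT as the extractor, OpenCV's brute-force matcher, scikit-learn's DBSCAN, the ratio-test threshold, and the smoothing rule are all assumptions introduced for illustration.

    from collections import deque

    import cv2
    import numpy as np
    from sklearn.cluster import DBSCAN


    class InterestPointTracker:
        def __init__(self, history=5):
            # Any interest point extractor could be plugged in here; SIFT is an assumption.
            self.extractor = cv2.SIFT_create()
            self.matcher = cv2.BFMatcher(cv2.NORM_L2)
            self.history = deque(maxlen=history)  # last five boxes, as described in the abstract

        def _good_matches(self, des_model, des_frame):
            # Lowe-style ratio test to keep only distinctive matches.
            good = []
            for m, n in self.matcher.knnMatch(des_model, des_frame, k=2):
                if m.distance < 0.75 * n.distance:
                    good.append(m)
            return good

        def track(self, model_gray, frame_gray):
            kp_m, des_m = self.extractor.detectAndCompute(model_gray, None)
            kp_f, des_f = self.extractor.detectAndCompute(frame_gray, None)
            if des_m is None or des_f is None or len(des_f) < 2:
                return None
            matches = self._good_matches(des_m, des_f)
            if len(matches) < 4:
                return None

            # Frame coordinates of the matched keypoints.
            pts = np.float32([kp_f[m.trainIdx].pt for m in matches])

            # DBSCAN labels spatially isolated matches as noise (-1); treat them as outliers.
            labels = DBSCAN(eps=30.0, min_samples=3).fit_predict(pts)
            inliers = pts[labels != -1]
            if len(inliers) == 0:
                return None

            # Bounding box of the surviving keypoints.
            x0, y0 = inliers.min(axis=0)
            x1, y1 = inliers.max(axis=0)
            box = np.array([x0, y0, x1, y1])

            # Smoothing over the last few boxes: reject estimates that deviate far
            # from the running mean, then average to damp instantaneous jumps.
            if len(self.history) == self.history.maxlen:
                mean = np.mean(self.history, axis=0)
                std = np.maximum(np.std(self.history, axis=0), 1.0)
                if np.any(np.abs(box - mean) > 3.0 * std):
                    box = mean  # discard the incorrect position estimate
            self.history.append(box)
            return np.mean(self.history, axis=0)

With this structure, swapping the extractor only requires changing the object assigned to self.extractor, which mirrors the short integration process the framework claims for adding a new interest point based algorithm.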
