Selective Multi-cotraining for Video Concept Detection

Research interest in cotraining, which combines information from (typically two) classifiers to iteratively enlarge the training set and strengthen the classifiers, is growing. We address the problem of selecting classifiers for cotraining when more than two representations of the data are available. The classifier built on the selected representation, or data descriptor, is expected to provide the most complementary information in the form of new labels for the target classifier; these labels are critical for the next learning iteration. We present two criteria for selecting the complementary classifier, in which classification results on a validation set are used to compute statistics for all available classifiers. These statistics serve not only to pick the best classifier but also to determine the number of new labels to add for the target classifier. We demonstrate the effectiveness of classifier selection on the semantic indexing task of the TRECVID 2013 dataset and compare it to self-training.
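To make the iteration concrete, below is a minimal sketch of one selective co-training round, assuming scikit-learn-style linear SVMs. The selection criterion shown (validation accuracy of each candidate view) and the way the number of new labels is scaled by that statistic are simplified stand-ins for the paper's two criteria, and all function and variable names are hypothetical.

```python
# A minimal sketch of one selective co-training round. The validation-accuracy
# criterion and the label-count heuristic below are illustrative placeholders,
# not the paper's exact criteria.
import numpy as np
from sklearn.svm import LinearSVC

def selective_cotraining_round(views_lab, y_lab, views_unlab, views_val, y_val,
                               target_view, max_new=100):
    """One iteration: pick the complementary view whose classifier scores best
    on the validation set, then let it label unlabeled data for the target view.

    views_lab / views_unlab / views_val: dict mapping view name -> feature matrix.
    """
    # Train one classifier per representation (view) on the current labeled pool.
    clfs = {v: LinearSVC().fit(X, y_lab) for v, X in views_lab.items()}

    # Validation statistics for every candidate view except the target itself.
    stats = {v: clfs[v].score(views_val[v], y_val)
             for v in clfs if v != target_view}
    best_view = max(stats, key=stats.get)

    # Scale the number of transferred labels by the selected view's validation
    # statistic (a simplified reading of how the statistics size the transfer).
    n_new = int(max_new * stats[best_view])

    # The selected classifier labels the unlabeled pool; keep its most
    # confident predictions (largest absolute decision margin).
    margins = clfs[best_view].decision_function(views_unlab[best_view])
    top = np.argsort(-np.abs(margins))[:n_new]
    new_labels = clfs[best_view].predict(views_unlab[best_view])[top]

    # These (index, label) pairs augment the target view's training set
    # for the next iteration.
    return best_view, top, new_labels
```

In a full run, the returned labels would be appended to the target view's labeled pool and the round repeated until the unlabeled pool is exhausted or validation performance stops improving.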

[1] Avrim Blum, et al. Combining Labeled and Unlabeled Data with Co-Training, 1998, COLT '98.

[2] James Allan, et al. A comparison of statistical significance tests for information retrieval evaluation, 2007, CIKM '07.

[3] David G. Lowe, et al. Distinctive Image Features from Scale-Invariant Keypoints, 2004, International Journal of Computer Vision.

[4] Georges Quénot, et al. TRECVID 2015 - An Overview of the Goals, Tasks, Data, Evaluation Mechanisms and Metrics, 2015, TRECVID.

[5] Xirong Li, et al. Classifying tag relevance with relevant positive and negative examples, 2013, ACM Multimedia.

[6] Koen E. A. van de Sande, et al. Empowering Visual Categorization With the GPU, 2011, IEEE Transactions on Multimedia.

[7] Ronald Phlypo, et al. Adaptive feature split selection for co-training: Application to tire irregular wear classification, 2013, IEEE International Conference on Acoustics, Speech and Signal Processing.

[8] Rayid Ghani, et al. Analyzing the effectiveness and applicability of co-training, 2000, CIKM '00.

[9] Jack Y. Yang, et al. Feature Selection for Co-Training: A QSAR Study, 2007, IC-AI.

[10] Andrea Vedaldi, et al. Vlfeat: an open and portable library of computer vision algorithms, 2010, ACM Multimedia.

[11] Rong Yan, et al. Co-training non-robust classifiers for video semantic concept detection, 2005, IEEE International Conference on Image Processing.

[12] Yoram Singer, et al. Pegasos: primal estimated sub-gradient solver for SVM, 2011, Math. Program..

[13] Rong Yan, et al. Semi-supervised cross feature learning for semantic concept detection in videos, 2005, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05).

[14] Paul Over, et al. TRECVID 2008 - Goals, Tasks, Data, Evaluation Mechanisms and Metrics, 2010, TRECVID.