Boosting Separability in Semisupervised Learning for Object Classification

Boosting algorithms, especially AdaBoost, have attracted great attention in computer vision. In early boosting algorithms, weak classifier selection and strong classifier learning are coupled. It has been demonstrated that decoupling these two processes provides more flexibility for training a better classifier. In such studies, linear discriminant analysis (LDA) has been adopted to select weak classifiers independently, based on class separability rather than the training error normally used in AdaBoost. However, LDA succeeds only when a large number of labeled training samples is available, and large-scale labeled training sets are not always available in computer vision applications such as object classification. To tackle this problem, this paper proposes semisupervised subspace learning combined with a boosting framework for object classification, through which unlabeled data can participate in the boosting training to compensate for the lack of labeled data. Within this framework, the paper develops three approaches that exploit unlabeled data in different ways. Experiments on several public image data sets show that the proposed methods outperform AdaBoost and existing semisupervised algorithms.
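To make the idea of separability-driven, semisupervised weak classifier selection concrete, the following is a minimal sketch. It assumes a two-class Fisher-ratio criterion on the labeled data as the separability measure and a graph-Laplacian smoothness penalty over labeled plus unlabeled data as the semisupervised term; the candidate generator, the affinity matrix `W`, and the trade-off weight `gamma` are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def fisher_separability(scores, labels):
    """Two-class Fisher criterion (between-class over within-class scatter)
    of a weak classifier's real-valued outputs on the labeled samples."""
    pos, neg = scores[labels == 1], scores[labels == -1]
    between = (pos.mean() - neg.mean()) ** 2
    within = pos.var() + neg.var() + 1e-12
    return between / within

def graph_smoothness(scores_all, W):
    """Graph-Laplacian penalty f^T L f with L = D - W; small when the
    classifier gives similar outputs to neighbouring (possibly unlabeled) samples."""
    L = np.diag(W.sum(axis=1)) - W
    return scores_all @ L @ scores_all

def select_weak_classifier(candidates, X_lab, y_lab, X_all, W, gamma=0.1):
    """One boosting round: pick the candidate weak classifier maximising
    labeled-data separability minus the unlabeled-data smoothness penalty."""
    best, best_score = None, -np.inf
    for h in candidates:                      # each h maps samples to real scores
        sep = fisher_separability(h(X_lab), y_lab)
        smooth = graph_smoothness(h(X_all), W)
        score = sep - gamma * smooth
        if score > best_score:
            best, best_score = h, score
    return best
```

In a full boosting loop this selection step would replace the usual error-minimising weak learner search, while sample reweighting and strong classifier combination proceed as in AdaBoost; the unlabeled samples influence training only through the smoothness term.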
