Individualized boosting learning for classification

Abstract Boosting is a popular learning approach in pattern recognition and machine learning. Conventional boosting algorithms learn a set of classifiers from the training samples alone, and they work well when the training and test samples are drawn from the same distribution. Because they pay no attention to the test samples, they may not handle large-scale, high-dimensional data sets well, since on such data the training and test distributions often differ. To deal with such data sets, face image data sets for example, we investigate boosting from a new perspective and propose a novel boosting algorithm, termed individualized boosting learning (IBL), in this paper. The proposed IBL algorithm learns from both the training and the test samples: for each test sample, IBL selects a subset of the training set, referred to as the learning region, runs a boosting algorithm on that region, and classifies the test sample with the resulting ensemble. Experiments on several popular real-world data sets show that the proposed IBL algorithm achieves desirable recognition performance.
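The per-test-sample procedure described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes the learning region is chosen as the k nearest training samples in Euclidean distance, and that the boosting step is plain AdaBoost with decision stumps on binary labels in {-1, +1}; the function names (`learning_region`, `ibl_classify`) and the parameters k and rounds are hypothetical.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def learning_region(X_train, y_train, x_test, k):
    """Learning region: the k training samples nearest to the test sample."""
    idx = sorted(range(len(X_train)), key=lambda i: euclidean(X_train[i], x_test))[:k]
    return [X_train[i] for i in idx], [y_train[i] for i in idx]

def adaboost_stumps(X, y, rounds):
    """AdaBoost with decision stumps; labels must be -1 or +1."""
    n, d = len(X), len(X[0])
    w = [1.0 / n] * n                      # uniform initial sample weights
    ensemble = []                          # (alpha, feature, threshold, polarity)
    for _ in range(rounds):
        best = None                        # (weighted error, f, t, pol, preds)
        for f in range(d):                 # exhaustive stump search
            for t in sorted(set(x[f] for x in X)):
                for pol in (1, -1):
                    preds = [pol if x[f] >= t else -pol for x in X]
                    err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, pol, preds)
        err, f, t, pol, preds = best
        if err >= 0.5:                     # no weak learner better than chance
            break
        err = max(err, 1e-10)              # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, t, pol))
        # Reweight: misclassified samples gain weight, then renormalize.
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, preds)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Sign of the weighted vote of the stumps."""
    score = sum(a * (pol if x[f] >= t else -pol) for a, f, t, pol in ensemble)
    return 1 if score >= 0 else -1

def ibl_classify(X_train, y_train, x_test, k=6, rounds=3):
    """Boost only on the test sample's learning region, then classify it."""
    Xr, yr = learning_region(X_train, y_train, x_test, k)
    return predict(adaboost_stumps(Xr, yr, rounds), x_test)
```

For example, with two well-separated clusters labeled -1 and +1, `ibl_classify` builds a fresh ensemble near each query point and labels it by the cluster it falls in; the key design point mirrored from the abstract is that the ensemble is fitted per test sample rather than once for the whole training set.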
