Laplacian regularized co-training

Co-training is a promising paradigm of semi-supervised learning and has drawn considerable attention in recent years. It typically works in an iterative manner on two disjoint feature views: a classifier is trained on each view, and the two classifiers teach each other by adding their predictions on unlabeled data to the training set of the other view. However, the classifiers perform poorly when only a small number of labeled examples is available, especially in the first rounds of iteration. In this paper, we present Laplacian regularized co-training (LapCo) to address this problem in standard co-training. During training, LapCo incorporates Laplacian regularization into each view's classifier, so that the manifold structure of the unlabeled data constrains the decision function and significantly boosts classification performance. Experiments on three popular UCI repository datasets show that the proposed LapCo outperforms the traditional co-training method.
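The abstract gives no pseudocode, so the following is a minimal sketch of how a LapCo-style training loop might look, assuming a linear Laplacian regularized least squares (LapRLS) base learner on each view, a k-NN similarity graph, and a single shared labeled pool rather than per-view pools. All names here (knn_graph, laprls_train, co_train, gamma_a, gamma_i, per_round) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def knn_graph(X, k=5):
    """Symmetric binary k-NN adjacency matrix over the rows of X."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    nbrs = np.argsort(dist, axis=1)[:, 1:k + 1]   # skip self at column 0
    W = np.zeros_like(dist)
    for i, row in enumerate(nbrs):
        W[i, row] = 1.0
    return np.maximum(W, W.T)                     # symmetrize

def laprls_train(X_lab, y_lab, X_unlab, gamma_a=1e-2, gamma_i=1e-2, k=5):
    """Linear LapRLS (an assumption; kernel variants also exist):
    minimize ||X_lab w - y||^2 + gamma_a ||w||^2 + gamma_i w^T X^T L X w,
    where L is the graph Laplacian over labeled + unlabeled points."""
    X = np.vstack([X_lab, X_unlab])
    W = knn_graph(X, k)
    L = np.diag(W.sum(axis=1)) - W                # unnormalized Laplacian
    d = X.shape[1]
    A = X_lab.T @ X_lab + gamma_a * np.eye(d) + gamma_i * (X.T @ L @ X)
    return np.linalg.solve(A, X_lab.T @ y_lab)

def co_train(X1, X2, y, labeled_idx, rounds=10, per_round=2):
    """Co-training with LapRLS base learners. y holds {-1, +1} labels at
    the positions in labeled_idx; other entries are placeholders that get
    overwritten as the views label points for each other (shared pool)."""
    labeled, y = set(labeled_idx), y.astype(float).copy()
    for _ in range(rounds):
        for X_view in (X1, X2):
            lab = sorted(labeled)
            unlab = [i for i in range(len(y)) if i not in labeled]
            if not unlab:
                return labeled, y
            w = laprls_train(X_view[lab], y[lab], X_view[unlab])
            scores = X_view[unlab] @ w
            # move the most confident predictions into the labeled pool
            for j in np.argsort(-np.abs(scores))[:per_round]:
                i = unlab[j]
                y[i] = 1.0 if scores[j] >= 0 else -1.0
                labeled.add(i)
    return labeled, y
```

The design intuition behind the gamma_i term: it pulls the decision values of neighboring points together along the k-NN graph, so the unlabeled data itself smooths each view's classifier. This targets exactly the weakness the abstract describes, since with very few labels in the early co-training rounds the plain least squares fit alone is unreliable.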
