Multi-view Regression Via Canonical Correlation Analysis
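As a rough sketch of the idea the title refers to, the snippet below fits canonical correlation analysis between two views of the same data and then runs a ridge regression on the CCA projection of one view, which is the general recipe behind CCA-based multi-view regression. The synthetic data, the dimensionality choices, and the use of scikit-learn's CCA and Ridge are assumptions made for illustration only, not the paper's own algorithm or guarantees.

```python
# Illustrative sketch only (not the paper's code):
# 1. fit CCA between two unlabeled views,
# 2. project view one onto the top canonical directions,
# 3. run ordinary ridge regression on that low-dimensional projection.
# Data shapes, noise levels, and the scikit-learn estimators are assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Two "views" generated from a shared latent signal, plus a scalar target.
n, d1, d2, k = 500, 20, 15, 4
latent = rng.normal(size=(n, k))
X1 = latent @ rng.normal(size=(k, d1)) + 0.1 * rng.normal(size=(n, d1))
X2 = latent @ rng.normal(size=(k, d2)) + 0.1 * rng.normal(size=(n, d2))
y = latent @ rng.normal(size=k) + 0.1 * rng.normal(size=n)

# Steps 1-2: CCA between the views, then project view one.
cca = CCA(n_components=k).fit(X1, X2)
Z1 = cca.transform(X1)  # canonical projection of view one

# Step 3: regress the target on the CCA features of view one.
model = Ridge(alpha=1.0).fit(Z1, y)
print("R^2 on the CCA features:", model.score(Z1, y))
```

The appeal of this recipe is that the CCA step needs only unlabeled paired views, so the labeled regression is carried out in a much lower-dimensional canonical subspace.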
[1] H. Hotelling. The most predictable criterion, 1935, Journal of Educational Psychology.
[2] Geoffrey E. Hinton, et al. Self-organizing neural network that discovers surfaces in random-dot stereograms, 1992, Nature.
[3] David Yarowsky, et al. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods, 1995, ACL.
[4] Suzanna Becker, et al. Mutual information maximization: models of cortical self-organization, 1996, Network.
[5] Avrim Blum, et al. Combining Labeled and Unlabeled Data with Co-Training, 1998, COLT.
[6] Naftali Tishby, et al. The information bottleneck method, 2000, ArXiv.
[7] Sanjoy Dasgupta, et al. PAC Generalization Bounds for Co-training, 2001, NIPS.
[8] Gal Chechik, et al. Information Bottleneck for Gaussian Variables, 2003, J. Mach. Learn. Res.
[9] John Shawe-Taylor, et al. Canonical Correlation Analysis: An Overview with Application to Learning Methods, 2004, Neural Computation.
[10] Steven P. Abney. Understanding the Yarowsky Algorithm, 2004, CL.
[11] John Shawe-Taylor, et al. Two view learning: SVM-2K, Theory and Practice, 2005, NIPS.
[12] Mikhail Belkin, et al. A Co-Regularization Approach to Semi-supervised Learning with Multiple Views, 2005.
[13] Maria-Florina Balcan, et al. A PAC-Style Model for Learning from Labeled and Unlabeled Data, 2005, COLT.
[14] Tong Zhang, et al. Learning Bounds for Kernel Regression Using Effective Data Dimensionality, 2005, Neural Computation.
[15] Alexander Zien, et al. Semi-Supervised Learning, 2006.
[16] Thomas Gärtner, et al. Efficient co-regularised least squares regression, 2006, ICML.
[17] Peter L. Bartlett, et al. The Rademacher Complexity of Co-Regularized Kernel Classes, 2007, AISTATS.