Domain Generalization Based on Transfer Component Analysis

This paper investigates domain generalization: how can knowledge acquired from related domains be applied to previously unseen domains? Transfer Component Analysis (TCA) learns a shared subspace by minimizing the dissimilarity across domains while maximally preserving the data variance. We propose Multi-TCA, an extension of TCA to multiple domains, as well as Multi-SSTCA, which extends TCA to the semi-supervised setting. Beyond the original application of TCA to domain adaptation, we show that Multi-TCA can also be applied to domain generalization. Multi-TCA and Multi-SSTCA are evaluated on two publicly available datasets for the tasks of landmine detection and Parkinson telemonitoring. Experimental results demonstrate that Multi-TCA improves predictive performance on previously unseen domains.
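To make the objective described above concrete, the sketch below shows the standard two-domain TCA of Pan et al. in NumPy, assuming an RBF kernel: the transfer components are the leading eigenvectors of (KLK + mu*I)^{-1} K H K, which shrink the maximum mean discrepancy between the domains while the centering term preserves data variance. The function name tca and the parameters dim, mu, and gamma are illustrative; the multi-domain (Multi-TCA) and semi-supervised (Multi-SSTCA) extensions proposed in the paper are not shown.

```python
import numpy as np

def tca(Xs, Xt, dim=2, mu=1.0, gamma=1.0):
    """Minimal sketch of two-domain Transfer Component Analysis.

    Learns a shared subspace that minimizes the Maximum Mean Discrepancy
    (MMD) between source Xs and target Xt while preserving data variance.
    """
    X = np.vstack([Xs, Xt])
    n_s, n_t = len(Xs), len(Xt)
    n = n_s + n_t

    # RBF kernel over the pooled source and target samples
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

    # MMD coefficient matrix L: 1/n_s^2 (source pairs), 1/n_t^2 (target pairs),
    # -1/(n_s * n_t) (cross-domain pairs)
    e = np.vstack([np.full((n_s, 1), 1.0 / n_s),
                   np.full((n_t, 1), -1.0 / n_t)])
    L = e @ e.T

    # Centering matrix H, used to preserve the variance of the embedded data
    H = np.eye(n) - np.ones((n, n)) / n

    # Transfer components: leading eigenvectors of (K L K + mu*I)^{-1} K H K
    A = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    eigvals, eigvecs = np.linalg.eig(A)
    idx = np.argsort(-eigvals.real)[:dim]
    W = eigvecs[:, idx].real

    # Embedded representations of source and target data
    Z = K @ W
    return Z[:n_s], Z[n_s:]
```

A downstream classifier or regressor can then be trained on the embedded source data and applied to the embedded target data, which is the usual way TCA-style subspaces are used for adaptation and generalization.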
