Common feature extraction in multi-source domains for transfer learning

In transfer learning scenarios, finding a common feature representation is crucial for tackling domain shift, where the training (source domain) and test (target domain) sets differ in their distributions. Classical dimensionality reduction approaches such as Fisher Discriminant Analysis (FDA), however, perform poorly when dealing with this shift problem. In this paper we introduce CoMuT, a method for Common feature extraction in Multi-source domains for Transfer learning, which finds a common feature representation between different source and target domains. CoMuT projects the data into a latent space that reduces the drift in distributions across domains while concurrently preserving the separability between classes. The latent space is constructed in a semi-supervised manner to bridge the domains and relate them to each other. Because the projected domains have similar distributions, classical machine learning methods can then be applied to classify the target data. Empirical results indicate that CoMuT outperforms other dimensionality reduction methods on several artificial and real datasets.
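The idea of a projection that preserves class separability while reducing domain drift can be sketched as follows. This is an illustrative construction, not CoMuT's actual objective: it combines a Fisher-style class-scatter criterion on the labelled source data with a penalty on the source–target mean discrepancy, and all function and parameter names (`common_feature_projection`, `lam`) are hypothetical.

```python
import numpy as np

def common_feature_projection(Xs, ys, Xt, dim=1, lam=1.0):
    """Illustrative sketch (not the paper's exact algorithm): find a
    projection that keeps the source classes separable (Fisher-style
    scatter) while pulling the source and target means together."""
    n_feat = Xs.shape[1]
    # domain-discrepancy term: penalise directions that separate the domains
    d = (Xs.mean(0) - Xt.mean(0))[:, None]
    M = d @ d.T
    # Fisher between-class / within-class scatter on the labelled source data
    mu = Xs.mean(0)
    Sb = np.zeros((n_feat, n_feat))
    Sw = np.zeros_like(Sb)
    for c in np.unique(ys):
        Xc = Xs[ys == c]
        diff = (Xc.mean(0) - mu)[:, None]
        Sb += len(Xc) * diff @ diff.T
        Sw += (Xc - Xc.mean(0)).T @ (Xc - Xc.mean(0))
    # maximise class scatter relative to within-class scatter plus domain shift
    A = np.linalg.pinv(Sw + lam * M + 1e-6 * np.eye(n_feat)) @ Sb
    vals, vecs = np.linalg.eig(A)
    order = np.argsort(-vals.real)
    return vecs[:, order[:dim]].real

# toy example: two source classes separated along feature 0,
# target domain shifted by a constant offset along feature 1
rng = np.random.default_rng(0)
Xs = np.vstack([rng.normal([0.0, 0.0], 0.3, (50, 2)),
                rng.normal([2.0, 0.0], 0.3, (50, 2))])
ys = np.array([0] * 50 + [1] * 50)
Xt = Xs + np.array([0.0, 3.0])
W = common_feature_projection(Xs, ys, Xt, dim=1)
# the learned direction weights the class-discriminative feature 0 more
# heavily than the shifted feature 1
```

In this toy setup the shift lies entirely along the second feature, so the learned direction concentrates on the first feature, where the classes differ; source and target become comparable in the projected space, which is the effect the abstract describes.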
