A cross-domain representation-learning framework combining class-separate and domain-merge objectives

Cross-domain learning has recently become one of the most important research directions in data mining and machine learning. In multi-domain learning, classification patterns and data distributions differ across domains, so knowledge (e.g., a classification hyperplane) cannot be transferred directly from one domain to another. This paper proposes a framework that combines class-separate objectives (maximizing separability among classes) with domain-merge objectives (minimizing separability among domains) to achieve cross-domain representation learning. Three specific methods built on this framework, called DMCS_CSF, DMCS_FDA and DMCS_PCDML, are presented, and experimental results validate their effectiveness.
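To make the combined objective concrete, the following is a minimal sketch of one plausible FDA-style instantiation of the framework; the function name, the toy generalized-eigenvalue formulation, and all parameters are assumptions for illustration, not the paper's actual DMCS algorithms. It finds a projection that maximizes between-class scatter (class-separate) while penalizing between-domain scatter (domain-merge):

```python
import numpy as np

def domain_merge_class_separate_projection(X, y, d, n_components=1, reg=1e-3):
    """Hypothetical sketch: project data so classes separate while domains merge.

    X : (n_samples, n_features) data matrix
    y : class labels, d : domain labels
    Maximizes w^T S_class w / w^T (S_domain + reg*I) w, an FDA-like ratio
    where S_class / S_domain are between-class / between-domain scatters.
    """
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)

    def between_scatter(labels):
        # Weighted scatter of group means around the global mean.
        S = np.zeros((X.shape[1], X.shape[1]))
        for c in np.unique(labels):
            Xc = X[labels == c]
            diff = (Xc.mean(axis=0) - mu)[:, None]
            S += len(Xc) * (diff @ diff.T)
        return S

    S_class = between_scatter(np.asarray(y))   # separability among classes
    S_domain = between_scatter(np.asarray(d))  # separability among domains

    # Generalized eigen-problem: directions that separate classes but not domains.
    A = np.linalg.solve(S_domain + reg * np.eye(X.shape[1]), S_class)
    vals, vecs = np.linalg.eig(A)
    order = np.argsort(-vals.real)
    W = vecs[:, order[:n_components]].real
    return W / np.linalg.norm(W, axis=0)
```

On toy data where the class shift lies along one axis and the domain shift along another, the learned direction aligns with the class axis and suppresses the domain axis, which is exactly the trade-off the combined objective encodes.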
