Structure-Preserved Multi-source Domain Adaptation

Domain adaptation has achieved promising results in many areas, such as image classification and object recognition. Although many algorithms have been proposed to handle differing domain distributions, multi-source unsupervised domain adaptation remains a challenge. In addition, most existing algorithms learn a classifier on the source domain and then predict labels for the target data, so only the knowledge encoded in the decision hyperplane is transferred to the target domain while the structural information of the data is ignored. In light of this, we propose a novel algorithm for multi-source unsupervised domain adaptation. Broadly, we aim to preserve the whole structure of the source domains and transfer it to serve the task on the target domain. The source and target data are clustered jointly, which simultaneously explores the structures of the source and target domains, and the structure preserved from the source domains further guides the clustering process on the target domain. Extensive experiments on two widely used databases for object recognition and face identification show substantial improvements of the proposed approach over several state-of-the-art methods. In particular, our algorithm can make use of multiple source domains and achieves more robust and better performance than single-source domain adaptation methods.
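To make the idea of structure-guided joint clustering concrete, the following is a minimal sketch, not the authors' exact formulation: it assumes a simple k-means-style procedure in which cluster centers are initialized from the class means of the pooled source domains, source samples keep their known class assignments, and only the target assignments are updated, so the class structure of the sources guides the partition of the target data. The function name `structure_guided_clustering` and all parameter choices are hypothetical.

```python
import numpy as np

def structure_guided_clustering(source_X_list, source_y_list, target_X, n_iter=50):
    """Jointly cluster multiple labeled source domains and an unlabeled target domain.

    Illustrative sketch: source labels are held fixed and act as partition-level
    guidance; only the target assignments and the shared centers are updated.
    """
    X_src = np.vstack(source_X_list)          # pool all source domains
    y_src = np.concatenate(source_y_list)
    classes = np.unique(y_src)

    # Initialize one center per class from the pooled source data.
    centers = np.stack([X_src[y_src == c].mean(axis=0) for c in classes])

    for _ in range(n_iter):
        # Assign each target sample to the nearest center.
        d = np.linalg.norm(target_X[:, None, :] - centers[None, :, :], axis=2)
        y_tgt = d.argmin(axis=1)

        # Update centers from fixed source labels plus current target assignments.
        new_centers = np.stack([
            np.vstack([X_src[y_src == c], target_X[y_tgt == k]]).mean(axis=0)
            for k, c in enumerate(classes)
        ])

        if np.allclose(new_centers, centers):
            break
        centers = new_centers

    # Predicted labels for the target samples.
    return classes[y_tgt]
```

In this toy version the guidance is hard (source assignments never change); the same structure could be enforced softly, e.g. by weighting the source and target contributions to each center differently.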
