Adaptive Component Embedding for Unsupervised Domain Adaptation

Domain adaptation has attracted considerable interest in the multimedia literature, especially for cross-domain knowledge transfer problems. In this paper, we propose an effective yet efficient approach, named Adaptive Component Embedding (ACE), for unsupervised domain adaptation. Specifically, ACE learns adaptive components across domains to embed all data in a shared subspace where the distribution divergence is mitigated and the underlying geometric structure of the local manifold is preserved. An adaptive classifier is then learned via the Representer Theorem in a Reproducing Kernel Hilbert Space (RKHS). The objective of our method can be efficiently solved in closed form. Comprehensive experiments on both standard and large-scale datasets verify that ACE significantly outperforms previous state-of-the-art methods in terms of classification accuracy and training time.
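The abstract's two-stage pipeline can be illustrated with a minimal sketch: learn a shared embedding whose generalized-eigenproblem objective trades off distribution divergence (an empirical MMD term) against data variance, then fit a classifier in closed form via the Representer Theorem (here, kernel ridge regression). This is not the authors' implementation; the function names, the linear kernel, the TCA-style eigenproblem, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def mmd_matrix(ns, nt):
    # Coefficient matrix M of the empirical maximum mean discrepancy:
    # tr(K M) equals the squared MMD between the two domain means.
    e = np.vstack([np.full((ns, 1), 1.0 / ns),
                   np.full((nt, 1), -1.0 / nt)])
    return e @ e.T

def learn_components(Xs, Xt, dim=2, mu=1.0):
    # Embed source and target into a shared subspace that reduces the
    # MMD between domains while keeping variance (TCA-style sketch,
    # an assumption; ACE's actual objective also preserves local
    # manifold geometry, omitted here for brevity).
    X = np.vstack([Xs, Xt])              # (n, d) pooled data
    n = X.shape[0]
    M = mmd_matrix(len(Xs), len(Xt))
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    K = X @ X.T                          # linear kernel (assumption)
    # Closed-form solution via the generalized eigenproblem
    # (K M K + mu I) W = K H K W diag(lambda): take top eigenvectors.
    A = K @ M @ K + mu * np.eye(n)
    B = K @ H @ K
    vals, vecs = np.linalg.eig(np.linalg.solve(A, B))
    idx = np.argsort(-vals.real)[:dim]
    W = vecs[:, idx].real                # adaptive components
    return K @ W                         # embedded data, shape (n, dim)

def representer_classifier(Z, y, lam=0.1):
    # By the Representer Theorem, the RKHS minimizer is
    # f(x) = sum_i alpha_i k(z_i, x); with squared loss this is
    # kernel ridge regression, solvable in closed form.
    K = Z @ Z.T
    return np.linalg.solve(K + lam * np.eye(len(Z)), y)
```

A usage sketch: embed both domains with `learn_components`, fit `representer_classifier` on the source rows of the embedding using source labels, and score target rows as `Z_target @ Z_source.T @ alpha`.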
