Latent Elastic-Net Transfer Learning

Subspace-learning-based transfer learning methods commonly seek a common subspace in which the discrepancy between the source and target domains is reduced, and the final classification is also performed in that subspace. However, minimum discrepancy does not guarantee the best classification performance, so the common subspace may not be the most discriminative. In this paper, we propose a latent elastic-net transfer learning (LET) method that simultaneously learns a latent subspace and a discriminative subspace. Specifically, data from different domains can be well interlaced in the latent subspace by minimizing the Maximum Mean Discrepancy (MMD). Because the latent subspace decouples inputs and outputs, a more compact data representation is obtained for discriminative subspace learning. Based on the latent subspace, we further propose a low-rank-constrained matrix elastic-net regression to learn another subspace in which the intrinsic intra-class structure correlations of data from different domains are well captured. In doing so, a better discriminative alignment is guaranteed, and LET thus learns a discriminative subspace for the final classification. Experiments on visual domain adaptation tasks show the superiority of the proposed LET method.
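The MMD criterion used above measures the distance between the source and target sample distributions in a reproducing kernel Hilbert space. The following is a minimal sketch of the biased empirical estimate of squared MMD with an RBF kernel; the function names and the `gamma` bandwidth are illustrative choices, not part of the LET formulation itself.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances, then the Gaussian (RBF) kernel.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def mmd2(Xs, Xt, gamma=1.0):
    # Biased empirical estimate of squared MMD between source and target samples:
    # mean of within-source kernel + mean of within-target kernel - 2 * cross mean.
    Kss = rbf_kernel(Xs, Xs, gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma)
    Kst = rbf_kernel(Xs, Xt, gamma)
    return Kss.mean() + Ktt.mean() - 2.0 * Kst.mean()

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 5))   # source-domain samples
Xt = rng.normal(0.5, 1.0, size=(100, 5))   # mean-shifted target-domain samples
print(mmd2(Xs, Xt))  # grows as the two domains drift further apart
```

In a subspace-learning setting such as LET, this quantity is evaluated on the projected features, and the projection is optimized so the estimate shrinks, interlacing the two domains in the latent subspace.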
