Deep Nonlinear Feature Coding for Unsupervised Domain Adaptation

Deep feature learning has recently emerged as an effective approach to domain adaptation. In this paper, we propose a Deep Nonlinear Feature Coding (DNFC) framework for unsupervised domain adaptation. DNFC builds on the marginalized stacked denoising autoencoder (mSDA) to extract rich deep features. We introduce two new elements into mSDA: domain divergence minimization via Maximum Mean Discrepancy (MMD), and nonlinear coding via kernelization. Both elements are essential for domain adaptation, as they ensure that the extracted deep features exhibit a small distribution discrepancy across domains and capture the nonlinearity of the data. Extensive experiments on benchmark datasets verify the effectiveness of DNFC: it attains much higher prediction accuracy than state-of-the-art domain adaptation methods, and compared to its basis, mSDA, it achieves remarkable prediction improvement while converging much faster with a small number of stacked layers.
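To make the domain-divergence element concrete, the following is a minimal sketch of the (biased, squared) Maximum Mean Discrepancy between source and target samples under an RBF kernel. This is an illustration of the MMD criterion in general, not the paper's DNFC implementation; the function names and the `gamma` bandwidth are illustrative choices.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian kernel on pairwise squared Euclidean distances.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def mmd2(Xs, Xt, gamma=1.0):
    # Biased estimate of squared MMD: mean(Kss) + mean(Ktt) - 2*mean(Kst).
    Kss = rbf_kernel(Xs, Xs, gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma)
    Kst = rbf_kernel(Xs, Xt, gamma)
    return Kss.mean() + Ktt.mean() - 2.0 * Kst.mean()

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, (200, 5))          # source samples
Xt_same = rng.normal(0.0, 1.0, (200, 5))     # target drawn from the same distribution
Xt_shift = rng.normal(2.0, 1.0, (200, 5))    # target with a mean shift (domain gap)
print(mmd2(Xs, Xt_same), mmd2(Xs, Xt_shift))
```

A feature-learning method that minimizes this quantity over learned representations, as DNFC does within the mSDA layers, pushes source and target feature distributions toward each other.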
