Unsupervised Domain Adaptation with Robust Deep Logistic Regression

The goal of unsupervised domain adaptation (UDA) is to eliminate the cross-domain discrepancy in probability distributions without labeled target samples being available during training. Recent studies have revealed the benefit of deep convolutional features trained on a large-scale dataset (e.g., ImageNet) in alleviating domain discrepancy. However, the transferability of features decreases as (i) the difference between the source and target domains grows, or (ii) the features are taken from layers closer to the top of the network. Therefore, even with deep features, domain adaptation remains necessary. In this paper, we treat UDA as a special case of semi-supervised learning, in which the source samples are labeled while the target samples are unlabeled. Conventional semi-supervised learning methods, however, usually perform poorly in UDA: due to the domain discrepancy, label noise is generally inevitable when a classifier trained on the source domain is used to predict target samples. We therefore deploy a robust deep logistic regression loss on the target samples, resulting in our RDLR model. In this way, pseudo-labels are gradually assigned to unlabeled target samples according to their maximum classification scores during training. Extensive experiments show that our method yields state-of-the-art results, demonstrating the effectiveness of robust logistic regression classifiers in UDA.
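The two target-side ingredients described above can be sketched in a minimal form: pseudo-labels are assigned only to target samples whose maximum softmax score clears a confidence threshold, and the logistic-regression (cross-entropy) loss on those pseudo-labeled samples is bounded so that noisy labels cannot dominate the objective. The threshold value and the particular truncated loss below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def assign_pseudo_labels(logits, threshold=0.9):
    """Assign pseudo-labels to target samples whose maximum softmax
    score exceeds `threshold`; the rest stay unlabeled (marked -1).
    The threshold of 0.9 is an assumption for illustration."""
    probs = softmax(logits)
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    labels[conf < threshold] = -1
    return labels

def robust_log_loss(logits, labels, alpha=4.0):
    """A bounded variant of the logistic-regression loss: per-sample
    cross-entropy is clipped at `alpha`, so grossly mislabeled
    (noisy) pseudo-labels contribute at most a constant. This
    truncation is one illustrative robustification, not necessarily
    the loss used in the RDLR model."""
    mask = labels >= 0                      # only pseudo-labeled samples
    if not mask.any():
        return 0.0
    probs = softmax(logits[mask])
    ce = -np.log(probs[np.arange(mask.sum()), labels[mask]] + 1e-12)
    return float(np.minimum(ce, alpha).mean())
```

During training, the classifier's current target-domain logits would be fed through `assign_pseudo_labels` each epoch, and `robust_log_loss` would be added to the supervised source loss; confident samples accumulate gradually as the classifier improves.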
