Providing Domain-Specific Models via Universal No-Data-Exchange Domain Adaptation

High-quality data and the machine learning (ML) models derived from it are gradually becoming commercial commodities, deployed effectively in a growing number of areas. ML model providers possess a set of trained models along with immense amounts of source data stored on their servers. To obtain a domain-specific model, a provider would normally require the consumer to upload its domain-specialized data and conduct domain adaptation on the server side. However, to protect the private information reflected in consumers' training data, and to maintain the commercial competitiveness of the ML service, it is preferable that no data be exchanged between servers and consumers. Moreover, consumers' data usually lacks supervision, i.e., classification labels. In this work, we therefore study how to conduct unsupervised domain adaptation (UDA) with no data exchange between domains. We are the first to propose a novel memory-cache-based adversarial training (AT) strategy for UDA on the target side without access to source data (such access is normally an essential requirement for regular AT). Our method also includes a multiple pseudo-labelling operation that is more accurate and robust than single pseudo-labelling. The AT and the multiple labelling work collaboratively to extract features shared across domains and to adapt the learned model to the target domain. We carry out extensive evaluation experiments on a number of data sets against strong baselines; the results show that our method performs very well and exceeds state-of-the-art performance on all tasks. Finally, we discuss how to extend our method to partial and open-set domain adaptation.
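To make the pseudo-labelling idea concrete, the sketch below shows one common way to combine multiple labelling signals in source-free UDA: classifier predictions are cross-checked against nearest-centroid assignments in feature space, and a pseudo label is kept only when the two agree. This is an illustrative heuristic under assumed inputs (`features` and softmax `probs` from a pretrained model), not the paper's exact algorithm.

```python
import numpy as np

def multiple_pseudo_labels(features, probs, num_iters=2):
    """Combine two labelling signals: classifier softmax output and
    nearest-centroid assignment in feature space (illustrative only).

    features: (n, d) target-domain feature vectors
    probs:    (n, k) softmax predictions from the source-trained model
    Returns (labels, agree): centroid-based labels and a boolean mask
    marking samples where both signals agree.
    """
    eps = 1e-8
    # Initial centroids: probability-weighted mean feature per class.
    centroids = probs.T @ features / (probs.sum(axis=0, keepdims=True).T + eps)
    for _ in range(num_iters):
        # Cosine similarity of each sample to each class centroid.
        f = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
        c = centroids / (np.linalg.norm(centroids, axis=1, keepdims=True) + eps)
        labels = np.argmax(f @ c.T, axis=1)
        # Refine centroids from the hard assignments.
        onehot = np.eye(probs.shape[1])[labels]
        centroids = onehot.T @ features / (onehot.sum(axis=0, keepdims=True).T + eps)
    # Keep a pseudo label only when both signals agree; disagreeing
    # samples can be excluded from (or down-weighted in) training.
    agree = labels == np.argmax(probs, axis=1)
    return labels, agree
```

Agreement filtering of this kind is why multiple labelling tends to be more robust than a single argmax: a sample mislabelled by one signal is usually caught by the other.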
