Black-box Probe for Unsupervised Domain Adaptation without Model Transferring

In recent years, researchers have paid increasing attention to the threats that deep learning models pose to data security and privacy, especially in the field of domain adaptation. Existing unsupervised domain adaptation (UDA) methods can achieve promising performance without transferring data from the source domain to the target domain. However, UDA based on representation alignment or self-supervised pseudo-labeling still relies on transferred source models. In many data-critical scenarios, methods that transfer models may suffer from membership inference attacks and expose private data. In this paper, we address a challenging new setting in which the source models can only be queried and cannot be transferred to the target domain. We propose Black-box Probe Domain Adaptation (BPDA), which adopts a query mechanism to probe and refine information from the source model using a third-party dataset. To obtain more informative query results, we further propose Distributionally Adversarial Training (DAT) to align the distribution of the third-party data with that of the target data. BPDA uses a public third-party dataset and DAT-based adversarial examples as the information carriers between the source and target domains, dispensing with the transfer of source data or models. Experimental results on the Digit-Five, Office-Caltech, Office-31, Office-Home, and DomainNet benchmarks demonstrate the feasibility of BPDA without model transferring.
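To make the setting concrete, below is a minimal sketch of the two components the abstract describes: probing a black-box source model by querying it on third-party data, and a DAT-style step that adversarially perturbs third-party inputs so that they look more like target-domain samples to a locally trained domain discriminator. This is an illustrative sketch assuming a PyTorch setup; the names `query_fn`, `probe_source_model`, `dat_perturb`, and `domain_discriminator` are hypothetical and not taken from the paper, and the actual BPDA procedure may differ in its details.

```python
# Illustrative sketch only (assumed PyTorch setup); not the paper's official implementation.
import torch
import torch.nn.functional as F


def probe_source_model(query_fn, third_party_loader, device="cpu"):
    """Query the black-box source model on third-party data and collect
    (input, soft pseudo-label) pairs. The query interface is the only
    information channel when the source model itself cannot be transferred."""
    probed = []
    for x, _ in third_party_loader:  # loader assumed to yield (input, label) pairs
        x = x.to(device)
        with torch.no_grad():
            soft_labels = query_fn(x)  # black-box query returning class probabilities
        probed.append((x.cpu(), soft_labels.cpu()))
    return probed


def dat_perturb(x, domain_discriminator, step_size=1e-2, steps=5):
    """DAT-style perturbation (sketch): nudge third-party inputs so a local
    domain discriminator (third-party vs. target) classifies them as target,
    making subsequent black-box queries more informative about the target domain."""
    x_adv = x.clone().detach().requires_grad_(True)
    target_domain = torch.ones(x.size(0), dtype=torch.long, device=x.device)  # label 1 = target
    for _ in range(steps):
        logits = domain_discriminator(x_adv)              # 2-way domain logits
        loss = F.cross_entropy(logits, target_domain)     # low loss = "looks like target"
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv - step_size * grad.sign()).detach().requires_grad_(True)
    return x_adv.detach()
```

Note that gradients are only taken through the locally trained domain discriminator; the source model is accessed strictly through forward queries, which is what makes the scheme compatible with a query-only, no-model-transfer constraint.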
