Information Laundering for Model Privacy

In this work, we propose information laundering, a novel framework for enhancing model privacy. Unlike data privacy, which concerns the protection of raw data, model privacy aims to protect a model that has already been learned and is to be deployed for public use. The private model may be obtained by any general learning method, and deploying it means that it returns a deterministic or random response to each input query. An information-laundered model wraps the private model in probabilistic components that deliberately maneuver the intended input and output of each query, so that adversarial acquisition of the model is less likely. Under the proposed framework, we develop an information-theoretic principle that quantifies the fundamental tradeoff between model utility and privacy leakage, and we derive the optimal design.
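To make the mechanism concrete, here is a minimal Python sketch of an information-laundered query path, assuming the laundered model is formed by a random input kernel and a random output kernel wrapped around a fixed classifier. The additive Gaussian input noise, the temperature-based output smoothing, and the names `laundered_predict` and `toy_model` are illustrative assumptions for this sketch, not the paper's derived optimal kernels.

```python
import numpy as np

rng = np.random.default_rng(0)

def laundered_predict(model, x, input_scale=0.1, output_temp=2.0):
    # Input kernel: randomize the query before the private model sees it
    # (additive Gaussian noise here; the framework leaves the kernel general).
    x_tilde = x + rng.normal(scale=input_scale, size=x.shape)

    # The private model returns a probability vector over classes.
    probs = model(x_tilde)

    # Output kernel: release a label sampled from a temperature-flattened
    # distribution instead of the model's raw argmax response.
    smoothed = probs ** (1.0 / output_temp)
    smoothed = smoothed / smoothed.sum()
    return rng.choice(len(smoothed), p=smoothed)

# Hypothetical two-class model: logistic score on the first feature.
def toy_model(x):
    p = 1.0 / (1.0 + np.exp(-4.0 * x[0]))
    return np.array([1.0 - p, p])

print(laundered_predict(toy_model, np.array([0.3])))
```

Raising `input_scale` or `output_temp` degrades the fidelity of the released responses but also reduces the information an extraction adversary gains per query, which is the utility-privacy tradeoff the framework quantifies.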
