Information Laundering for Model Privacy

In this work, we propose information laundering, a novel framework for enhancing model privacy. Unlike data privacy, which concerns the protection of raw data, model privacy aims to protect an already-learned model that is to be deployed for public use. The private model can be obtained by general learning methods, and deploying it means that it returns a deterministic or random response to each input query. An information-laundered model consists of probabilistic components that deliberately maneuver the intended input and output of queries to the model, so that adversarial acquisition of the model is less likely. Under the proposed framework, we develop an information-theoretic principle that quantifies the fundamental tradeoff between model utility and privacy leakage, and we derive the optimal design.
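To make the architecture concrete, the laundered model can be viewed as a composition of two probabilistic kernels wrapped around the private model. In the sketch below, the kernel notation K_1, K_2 and the tradeoff weights β_1, β_2 are illustrative assumptions (with discrete alphabets assumed for simplicity), not notation taken verbatim from the paper. An input kernel K_1 randomizes each query X into X', the private model p maps X' to Y', and an output kernel K_2 randomizes Y' into the released response Y, so every query traverses the Markov chain X → X' → Y' → Y:

\[
p_K(y \mid x) \;=\; \sum_{x'} \sum_{y'} K_1(x' \mid x)\, p(y' \mid x')\, K_2(y \mid y').
\]

One plausible way to express the utility-privacy tradeoff is then to choose the kernels that keep the laundered model close to the original while limiting how much information passes through each interface:

\[
\min_{K_1,\, K_2} \;\; \mathbb{E}_X\!\left[ D_{\mathrm{KL}}\big(p(\cdot \mid X)\,\|\,p_K(\cdot \mid X)\big) \right] \;+\; \beta_1\, I(X; X') \;+\; \beta_2\, I(Y'; Y),
\]

where larger weights β_1, β_2 ≥ 0 enforce less leakage through the input and output channels at the cost of utility.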
