Private Deep Neural Network Models Publishing for Machine Learning as a Service

Machine learning as a service (MLaaS) has emerged recently to ease the tension between resource-intensive deep learning tasks and growing application demand. A deep learning service provider can let its clients benefit from deep learning techniques at an affordable price, sparing them the heavy computation of training models themselves. However, the service provider may have serious concerns about model privacy once a deep neural network model is published. Previous model publishing solutions rely mainly on additional artificial noise: by adding carefully calibrated noise to parameters or gradients during the training phase, strong privacy guarantees such as differential privacy can be achieved. However, this kind of approach offers no guarantees on other aspects, such as the quality of the noisily trained model and the convergence of the modified learning algorithm. In this paper, we propose an alternative private deep neural network model publishing solution that does not interfere with the original training phase. We provide privacy, convergence, and quality guarantees for the published model simultaneously. Furthermore, our solution achieves a smaller privacy budget than the artificial-noise-based training solutions proposed in previous work. Specifically, our solution attains acceptable test accuracy with privacy budget ε = 1, while the accuracy of membership inference attacks decreases from nearly 90% to around 60% across all classes.
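For context, the artificial-noise baseline that this abstract contrasts against is typically realized as differentially private SGD: each example's gradient is clipped to a fixed norm and Gaussian noise is added before the update. The sketch below is a minimal illustration of that baseline technique only, not of the publishing method proposed in this paper; the model shape and the hyperparameters `clip_norm` and `noise_multiplier` are hypothetical, and tracking the actual privacy budget ε would still require a separate accounting step.

```python
# Minimal sketch of the gradient-noising baseline (DP-SGD style), for illustration only.
# Assumptions: a small PyTorch model, per-example gradients obtained via microbatches of
# size one, and Gaussian noise scaled by a hypothetical noise_multiplier.
import torch
import torch.nn as nn

def dp_sgd_step(model, loss_fn, xs, ys, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    """One noisy update over a batch, clipping each example's gradient individually."""
    params = [p for p in model.parameters() if p.requires_grad]
    accumulated = [torch.zeros_like(p) for p in params]

    for x, y in zip(xs, ys):                      # one example per microbatch
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        # Clip the per-example gradient to L2 norm <= clip_norm.
        total_norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in params))
        scale = (clip_norm / (total_norm + 1e-12)).clamp(max=1.0)
        for acc, p in zip(accumulated, params):
            acc += p.grad * scale

    with torch.no_grad():
        for acc, p in zip(accumulated, params):
            # Add Gaussian noise calibrated to the clipping norm, then average and step.
            acc += torch.normal(0.0, noise_multiplier * clip_norm, size=acc.shape)
            p -= lr * acc / len(xs)

# Example usage on toy data (hypothetical shapes).
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
xs, ys = torch.randn(8, 4), torch.randint(0, 2, (8,))
dp_sgd_step(model, nn.CrossEntropyLoss(), xs, ys)
```

Because the noise enters every training step, this baseline perturbs the optimization itself, which is exactly why its convergence and final model quality are hard to guarantee; the solution proposed in the paper instead leaves training untouched and applies its privacy mechanism only when the model is published.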
