Optimizing Privacy-Preserving Outsourced Convolutional Neural Network Predictions

Convolutional neural networks are machine-learning models widely applied to prediction tasks such as computer vision and medical image analysis. Their great predictive power requires extensive computation, which encourages model owners to host the prediction service on a cloud platform. Recent research focuses on the privacy of the query and the results, but does not provide model privacy against the model-hosting server and may leak partial information about the results. Some schemes further require frequent interaction with the querier or impose heavy computation overheads, which discourages queriers from using the prediction service. This paper proposes a new scheme for privacy-preserving neural network prediction in the outsourced setting, i.e., the server learns neither the query, the (intermediate) results, nor the model. Similar to SecureML (S&P'17), a representative work that provides model privacy, we leverage two non-colluding servers with secret sharing and triplet generation to minimize the usage of heavyweight cryptography. Further, we adopt asynchronous computation to improve the throughput, and design garbled circuits for the non-polynomial activation function to keep the same accuracy as the underlying network (instead of approximating it). Our experiments on the MNIST dataset show that our scheme achieves an average of 122x, 14.63x, and 36.69x reduction in latency compared to SecureML, MiniONN (CCS'17), and EzPC (EuroS&P'19), respectively. For communication costs, our scheme outperforms SecureML by 1.09x, MiniONN by 36.69x, and EzPC by 31.32x on average. On the CIFAR dataset, our scheme achieves lower latency by factors of 7.14x and 3.48x compared to MiniONN and EzPC, respectively, and 13.88x and 77.46x lower communication costs than MiniONN and EzPC.
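The two-server approach named in the abstract rests on additive secret sharing combined with multiplication triplets (Beaver triples): each server holds one random-looking share of every value, and a precomputed triple lets the servers multiply shared values with only cheap local arithmetic plus one exchange of masked values. The sketch below is an illustrative single-process simulation, not the paper's protocol; the modulus `P`, the trusted dealer in `beaver_triple`, and all function names are assumptions for exposition (the actual scheme works over a fixed-point ring and generates triplets offline between the servers).

```python
import random

P = 2**61 - 1  # illustrative modulus; real schemes use a fixed-point ring such as Z_{2^64}

def share(x):
    """Split x into two additive shares mod P, one per server."""
    r = random.randrange(P)
    return (r, (x - r) % P)

def reconstruct(s0, s1):
    """Combine both servers' shares to recover the value."""
    return (s0 + s1) % P

def beaver_triple():
    """Dealer-generated triple (a, b, c) with c = a*b, handed out in shares.
    In a deployed scheme the triplet is produced in an offline phase instead."""
    a, b = random.randrange(P), random.randrange(P)
    return share(a), share(b), share((a * b) % P)

def shared_mul(x_sh, y_sh):
    """Multiply secret-shared x and y using one Beaver triple.
    The servers open only the masked values e = x - a and f = y - b,
    which reveal nothing about x or y since a and b are uniformly random."""
    (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
    e = reconstruct((x_sh[0] - a0) % P, (x_sh[1] - a1) % P)
    f = reconstruct((y_sh[0] - b0) % P, (y_sh[1] - b1) % P)
    # Locally: z = c + e*b + f*a + e*f, split so each server computes its half.
    z0 = (c0 + e * b0 + f * a0 + e * f) % P
    z1 = (c1 + e * b1 + f * a1) % P
    return (z0, z1)

# Example: multiply 7 and 6 without either "server" seeing the inputs.
x_sh, y_sh = share(7), share(6)
assert reconstruct(*shared_mul(x_sh, y_sh)) == 42
```

Correctness follows from x = a + e and y = b + f, so xy = c + eb + fa + ef; every term is either already shared or publicly computable, which is why the online phase needs no heavyweight cryptography. Convolutions and fully connected layers reduce to many such shared multiplications, while the non-polynomial ReLU is handled by garbled circuits as the abstract describes.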

[1] Sebastian Nowozin et al. Oblivious Multi-Party Machine Learning on Trusted Processors, 2016, USENIX Security Symposium.

[2] Anantha Chandrakasan et al. Gazelle: A Low Latency Framework for Secure Neural Network Inference, 2018, IACR Cryptol. ePrint Arch.

[3] Michael Naehrig et al. Improved Security for a Ring-Based Fully Homomorphic Encryption Scheme, 2013, IMACC.

[4] Ian Goodfellow et al. Deep Learning with Differential Privacy, 2016, CCS.

[5] Qian Wang et al. Deep Learning-Based Gait Recognition Using Smartphones in the Wild, 2018, IEEE Transactions on Information Forensics and Security.

[6] Yao Lu et al. Oblivious Neural Network Predictions via MiniONN Transformations, 2017, IACR Cryptol. ePrint Arch.

[7] Vitaly Shmatikov et al. Membership Inference Attacks Against Machine Learning Models, 2017 IEEE Symposium on Security and Privacy (SP).

[8] Sherman S. M. Chow et al. Goten: GPU-Outsourcing Trusted Execution of Neural Network Training, 2019, AAAI.

[9] Fan Zhang et al. Stealing Machine Learning Models via Prediction APIs, 2016, USENIX Security Symposium.

[10] Yann LeCun et al. The MNIST database of handwritten digits, 2005.

[11] Guigang Zhang et al. Deep Learning, 2016, Int. J. Semantic Comput.

[12] Peter Rindal et al. ABY3: A Mixed Protocol Framework for Machine Learning, 2018, IACR Cryptol. ePrint Arch.

[13] Sherman S. M. Chow et al. Blindfolded Evaluation of Random Forests with Multi-Key Homomorphic Encryption, 2019, IACR Cryptol. ePrint Arch.

[14] Jinyuan Jia et al. AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning, 2018, USENIX Security Symposium.

[15] Aseem Rastogi et al. EzPC: Programmable, Efficient, and Scalable Secure Two-Party Computation, 2018, IACR Cryptol. ePrint Arch.

[16] Baochun Li et al. Differentially-Private Deep Learning from an Optimization Perspective, IEEE INFOCOM 2019 - IEEE Conference on Computer Communications.

[17] Adam D. Smith et al. Is Interaction Necessary for Distributed Private Learning?, 2017 IEEE Symposium on Security and Privacy (SP).

[18] Julian Jang et al. Towards privacy-preserving classification in neural networks, 2016 14th Annual Conference on Privacy, Security and Trust (PST).

[19] Farinaz Koushanfar et al. DeepSecure: Scalable Provably-Secure Deep Learning, 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC).

[20] Yanjiao Chen et al. Privacy-Preserving Collaborative Deep Learning With Unreliable Participants, 2020, IEEE Transactions on Information Forensics and Security.

[21] Lakshminarayanan Subramanian et al. Two-Party Computation Model for Privacy-Preserving Queries over Distributed Databases, 2009, NDSS.

[22] Donald Beaver. Efficient Multiparty Protocols Using Circuit Randomization, 1991, CRYPTO.

[23] Qian Wang et al. Towards Private and Scalable Cross-Media Retrieval, 2021, IEEE Transactions on Dependable and Secure Computing.

[24] Benny Pinkas et al. Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring, 2018, USENIX Security Symposium.

[25] Vitaly Shmatikov et al. Chiron: Privacy-preserving Machine Learning as a Service, 2018, arXiv.

[26] Sherman S. M. Chow et al. Privacy-Preserving Machine Learning, 2022, SpringerBriefs on Cyber Security Systems and Networks.

[27] Constance Morel et al. Privacy-Preserving Classification on Deep Neural Network, 2017, IACR Cryptol. ePrint Arch.

[28] Frederik Vercauteren et al. Fully homomorphic SIMD operations, 2012, Designs, Codes and Cryptography.

[29] Long Chen et al. Robust Lane Detection From Continuous Driving Scenes Using Deep Neural Networks, 2019, IEEE Transactions on Vehicular Technology.

[30] Ananthram Swami et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.

[31] Shafi Goldwasser et al. Machine Learning Classification over Encrypted Data, 2015, NDSS.

[32] Xiaoqian Jiang et al. Secure Outsourced Matrix Computation and Application to Neural Networks, 2018, CCS.

[33] Michael Zohner et al. ABY - A Framework for Efficient Mixed-Protocol Secure Two-Party Computation, 2015, NDSS.

[34] Jascha Sohl-Dickstein et al. Adversarial Examples that Fool both Computer Vision and Time-Limited Humans, 2018, NeurIPS.

[35] Michael Naehrig et al. Manual for Using Homomorphic Encryption for Bioinformatics, 2017, Proceedings of the IEEE.

[36] A. Yao et al. Fair exchange with a semi-trusted third party (extended abstract), 1997, CCS '97.

[37] Sherman S. M. Chow et al. Privacy-Preserving Decision Trees Evaluation via Linear Functions, 2017, ESORICS.

[38] Dan Boneh et al. Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware, 2018, ICLR.

[39] Michael Naehrig et al. CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy, ICML 2016.

[40] Adi Shamir. How to share a secret, 1979, CACM.

[41] Farinaz Koushanfar et al. XONN: XNOR-based Oblivious Deep Neural Network Inference, 2019, IACR Cryptol. ePrint Arch.

[42] Craig Gentry. Fully homomorphic encryption using ideal lattices, 2009, STOC '09.

[43] Qian Wang et al. DeepCrack: Learning Hierarchical Convolutional Features for Crack Detection, 2019, IEEE Transactions on Image Processing.

[44] Sherman S. M. Chow. Can We Securely Outsource Big Data Analytics with Lightweight Cryptography?, 2019, SCC '19.

[45] Ming Li et al. A tale of two clouds: Computing on data encrypted under multiple keys, 2014 IEEE Conference on Communications and Network Security.

[46] Mario Fritz et al. ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models, 2018, NDSS.

[47] Emiliano De Cristofaro et al. LOGAN: Membership Inference Attacks Against Generative Models, 2017, Proc. Priv. Enhancing Technol.

[48] Payman Mohassel et al. SecureML: A System for Scalable Privacy-Preserving Machine Learning, 2017 IEEE Symposium on Security and Privacy (SP).

[49] Hassan Takabi et al. Privacy-preserving Machine Learning as a Service, 2018, Proc. Priv. Enhancing Technol.

[50] Yanjiao Chen et al. Privacy-Preserving Collaborative Model Learning: The Case of Word Vector Training, 2018, IEEE Transactions on Knowledge and Data Engineering.

[51] Qian Wang et al. Invisible Adversarial Attack against Deep Neural Networks: An Adaptive Penalization Approach, 2019, IEEE Transactions on Dependable and Secure Computing.