Secure and Verifiable Inference in Deep Neural Networks

Outsourced inference services have greatly promoted the popularity of deep learning and helped users customize a range of personalized applications. However, they also entail a variety of security and privacy issues introduced by untrusted service providers. In particular, a malicious adversary may violate user privacy during the inference process or, worse, return incorrect results to the client by compromising the integrity of the outsourced model. To address these problems, we propose SecureDL to protect the model's integrity and the user's privacy in the Deep Neural Network (DNN) inference process. In SecureDL, we first transform the complicated non-linear activation functions of DNNs into low-degree polynomials. We then give a novel method for generating sensitive-samples, which can verify the integrity of a model's parameters outsourced to the server with high accuracy. Finally, we exploit Leveled Homomorphic Encryption (LHE) to achieve privacy-preserving inference. We show that our sensitive-samples are indeed highly sensitive to model changes, so that even a small change in parameters is reflected in the model outputs. Through experiments conducted on real data and against different types of attacks, we demonstrate the superior performance of SecureDL in terms of detection accuracy, inference accuracy, and computation and communication overhead.
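The motivation for the polynomial transformation is that LHE schemes support only additions and a bounded number of multiplications, so non-polynomial activations such as the sigmoid cannot be evaluated directly under encryption. The abstract does not specify the approximation method used; a minimal sketch, assuming a least-squares polynomial fit over a bounded input interval (the function name and parameters are illustrative, not from the paper), might look like:

```python
import numpy as np

def fit_activation_poly(fn, degree=3, lo=-4.0, hi=4.0, n=1000):
    """Fit a low-degree polynomial to an activation function
    over [lo, hi] by discrete least squares."""
    xs = np.linspace(lo, hi, n)
    coeffs = np.polyfit(xs, fn(xs), degree)
    return np.poly1d(coeffs)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

poly = fit_activation_poly(sigmoid, degree=3)

# Maximum deviation from the true sigmoid on the fitted interval.
xs = np.linspace(-4.0, 4.0, 500)
err = np.max(np.abs(poly(xs) - sigmoid(xs)))
```

The fit is only valid on the chosen interval, so in practice the inputs to each activation layer would need to be kept (or normalized) within that range; a degree-3 fit keeps the multiplicative depth of the encrypted evaluation small.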
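The intuition behind sensitive-samples is to choose verification inputs whose outputs change as much as possible when the model parameters change, so that a tampered model is exposed by comparing outputs against precomputed fingerprints. The paper formulates this as an optimization; the toy sketch below (all names, the candidate-search strategy, and the polynomial activation are illustrative assumptions, not the paper's procedure) merely picks, among random candidates, the input with the largest numerical parameter-gradient norm:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # toy one-layer model parameters

def model(x, W):
    z = x @ W
    # Low-degree polynomial activation (Taylor-style sigmoid surrogate).
    return 0.5 + 0.25 * z - z**3 / 48.0

def param_grad_norm(x, W, eps=1e-4):
    """Approximate squared norm of d model(x) / d W by finite differences."""
    total = 0.0
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy()
            Wp[i, j] += eps
            total += np.sum((model(x, Wp) - model(x, W)) ** 2)
    return total / eps**2

# Pick the most parameter-sensitive input among random candidates.
candidates = rng.normal(size=(32, 4))
sensitive = max(candidates, key=lambda x: param_grad_norm(x, W))

# A slightly tampered model shifts the output on the sensitive sample;
# the fingerprint check model(sensitive, W') == model(sensitive, W) fails.
W_tampered = W.copy()
W_tampered[0, 0] += 0.01
shift_sensitive = np.linalg.norm(model(sensitive, W_tampered) - model(sensitive, W))
```

In the actual scheme the client would precompute fingerprints on such samples before outsourcing and then query the server with them, flagging any mismatch as an integrity violation.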
