[1] Debdeep Mukhopadhyay et al. How Secure are Deep Learning Algorithms from Side-Channel based Reverse Engineering? 2019 56th ACM/IEEE Design Automation Conference (DAC).
[2] Anca D. Dragan et al. Model Reconstruction from Model Explanations. FAT, 2018.
[3] Samuel Marchal et al. PRADA: Protecting Against DNN Model Stealing Attacks. 2019 IEEE European Symposium on Security and Privacy (EuroS&P).
[4] Vasisht Duddu et al. A Survey of Adversarial Machine Learning in Cyber Warfare. Defence Science Journal, 2018.
[5] Benjamin Edwards et al. Defending Against Model Stealing Attacks Using Deceptive Perturbations. arXiv, 2018.
[6] Marten van Dijk et al. Revisiting Definitional Foundations of Oblivious RAM for Secure Processor Implementations. arXiv, 2017.
[7] David M. Brooks et al. Applied Machine Learning at Facebook: A Datacenter Infrastructure Perspective. 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA).
[8] Josep Torrellas et al. Cache Telepathy: Leveraging Shared Resource Attacks to Learn DNN Architectures. USENIX Security Symposium, 2018.
[9] Mario Fritz et al. ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models. NDSS, 2018.
[10] Tribhuvanesh Orekondy et al. Knockoff Nets: Stealing Functionality of Black-Box Models. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[11] Paul C. Kocher et al. Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems. CRYPTO, 1996.
[12] Binghui Wang et al. Stealing Hyperparameters in Machine Learning. 2018 IEEE Symposium on Security and Privacy (SP).
[13] Vivienne Sze et al. Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Proceedings of the IEEE, 2017.
[14] Fan Zhang et al. Stealing Machine Learning Models via Prediction APIs. USENIX Security Symposium, 2016.
[15] Dan S. Wallach et al. Opportunities and Limits of Remote Timing Attacks. ACM Transactions on Information and System Security (TISSEC), 2009.
[16] Somesh Jha et al. Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing. USENIX Security Symposium, 2014.
[17] David Brumley et al. Remote Timing Attacks are Practical. Computer Networks, 2003.
[18] Xin He et al. Simple Physical Adversarial Examples against End-to-End Autonomous Driving Models. 2019 IEEE International Conference on Embedded Software and Systems (ICESS).
[19] Lejla Batina et al. CSI NN: Reverse Engineering of Neural Network Architectures Through Electromagnetic Side Channel. USENIX Security Symposium, 2019.
[20] Geoffrey E. Hinton et al. Distilling the Knowledge in a Neural Network. arXiv, 2015.
[21] Yang Zhang et al. MLCapsule: Guarded Offline Deployment of Machine Learning as a Service. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[22] Rich Caruana et al. Do Deep Nets Really Need to be Deep? NIPS, 2013.
[23] Andrew Zisserman et al. Very Deep Convolutional Networks for Large-Scale Image Recognition. ICLR, 2014.
[24] Anantha Chandrakasan et al. Gazelle: A Low Latency Framework for Secure Neural Network Inference. IACR Cryptology ePrint Archive, 2018.
[25] Vijay Arya et al. Model Extraction Warning in MLaaS Paradigm. ACSAC, 2017.
[26] Quoc V. Le et al. Neural Architecture Search with Reinforcement Learning. ICLR, 2016.
[27] Marc'Aurelio Ranzato et al. Large Scale Distributed Deep Networks. NIPS, 2012.
[28] Lejla Batina et al. CSI Neural Network: Using Side-channels to Recover Your Artificial Neural Network Information. IACR Cryptology ePrint Archive, 2018.
[29] Somesh Jha et al. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures. CCS, 2015.
[30] Farinaz Koushanfar et al. XONN: XNOR-based Oblivious Deep Neural Network Inference. IACR Cryptology ePrint Archive, 2019.
[31] Billy Bob Brumley et al. Remote Timing Attacks Are Still Practical. ESORICS, 2011.
[32] Alberto Ferreira de Souza et al. Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data. 2018 International Joint Conference on Neural Networks (IJCNN).
[33] Chang Liu et al. Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning. 2018 IEEE Symposium on Security and Privacy (SP).
[34] Michael Naehrig et al. CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy. ICML, 2016.
[35] Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, 2006.
[36] Elaine Shi et al. Path ORAM: An Extremely Simple Oblivious RAM Protocol. CCS, 2012.
[37] Zhuolin Yang et al. Characterizing Audio Adversarial Examples Using Temporal Dependency. ICLR, 2018.
[38] C. Dwork et al. Exposed! A Survey of Attacks on Private Data. Annual Review of Statistics and Its Application, 2017.
[39] Yang Zhang et al. Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning. USENIX Security Symposium, 2019.
[40] Geoffrey E. Hinton et al. ImageNet Classification with Deep Convolutional Neural Networks. Communications of the ACM, 2012.
[41] Nael B. Abu-Ghazaleh et al. Rendered Insecure: GPU Side Channel Attacks are Practical. CCS, 2018.
[42] Ting Wang et al. TextBugger: Generating Adversarial Text Against Real-world Applications. NDSS, 2018.
[43] Tudor Dumitras et al. Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks. arXiv, 2018.
[44] David Sands et al. Termination-Insensitive Noninterference Leaks More Than Just a Bit. ESORICS, 2008.
[45] Yuan Xie et al. Neural Network Model Extraction Attacks in Edge Devices by Hearing Architectural Hints. arXiv, 2019.
[46] Reza Shokri et al. Machine Learning with Membership Privacy using Adversarial Regularization. CCS, 2018.
[47] Seong Joon Oh et al. Towards Reverse-Engineering Black-Box Neural Networks. ICLR, 2017.
[48] Somesh Jha et al. Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting. 2018 IEEE 31st Computer Security Foundations Symposium (CSF).
[49] Ronald J. Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 1992.
[50] Paul C. Kocher et al. Differential Power Analysis. CRYPTO, 1999.
[51] Kilian Q. Weinberger et al. Densely Connected Convolutional Networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[52] Yanjun Qi et al. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers. NDSS, 2016.
[53] Thomas Brox et al. Striving for Simplicity: The All Convolutional Net. ICLR, 2014.
[54] Zhiru Zhang et al. Reverse Engineering Convolutional Neural Networks Through Side-channel Information Leaks. 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC).
[55] Christopher Meek et al. Adversarial Learning. KDD, 2005.
[56] Bo Luo et al. I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators. ACSAC, 2018.
[57] Ananthram Swami et al. Practical Black-Box Attacks against Machine Learning. AsiaCCS, 2016.
[58] Deian Stefan et al. Addressing Covert Termination and Timing Channels in Concurrent Information Flow Systems. ICFP, 2012.
[59] Sébastien Gambs et al. Reconstruction Attack through Classifier Analysis. DBSec, 2012.
[60] Jonathon Shlens et al. Explaining and Harnessing Adversarial Examples. ICLR, 2014.
[61] Giuseppe Ateniese et al. Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning. CCS, 2017.
[62] Vitaly Shmatikov et al. Membership Inference Attacks Against Machine Learning Models. 2017 IEEE Symposium on Security and Privacy (SP).
[63] Vitaly Shmatikov et al. Privacy-Preserving Deep Learning. 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton).