Attacks and Defenses towards Machine Learning Based Systems
[1] Vitaly Shmatikov, et al. Membership Inference Attacks Against Machine Learning Models, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[2] Úlfar Erlingsson, et al. RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response, 2014, CCS.
[3] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[4] Fabio Roli, et al. Multiple classifier systems for robust classifier design in adversarial environments, 2010, Int. J. Mach. Learn. Cybern.
[5] Jan Hendrik Metzen, et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[6] Sebastian Nowozin, et al. Oblivious Multi-Party Machine Learning on Trusted Processors, 2016, USENIX Security Symposium.
[7] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[8] Ming Li, et al. Learning in the presence of malicious errors, 1988, STOC '88.
[9] Somesh Jha, et al. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures, 2015, CCS.
[10] Martín Abadi, et al. Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data, 2016, ICLR.
[11] Pengtao Xie, et al. Crypto-Nets: Neural Networks over Encrypted Data, 2014, ArXiv.
[12] Ian Goodfellow, et al. Deep Learning with Differential Privacy, 2016, CCS.
[13] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[14] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[15] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[16] Fan Zhang, et al. Stealing Machine Learning Models via Prediction APIs, 2016, USENIX Security Symposium.
[17] Dale Schuurmans, et al. Learning with a Strong Adversary, 2015, ArXiv.
[18] Dawn Xiaodong Song, et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 2017, ArXiv.
[19] Fabio Roli, et al. Bagging Classifiers for Fighting Poisoning Attacks in Adversarial Classification Tasks, 2011, MCS.
[20] David A. Forsyth, et al. SafetyNet: Detecting and Rejecting Adversarial Examples Robustly, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[21] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[22] Lujo Bauer, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, 2016, CCS.
[23] Zekeriya Erkin, et al. Generating Private Recommendations Efficiently Using Homomorphic Encryption and Data Packing, 2012, IEEE Transactions on Information Forensics and Security.
[24] Patrick D. McDaniel, et al. Adversarial Perturbations Against Deep Neural Networks for Malware Classification, 2016, ArXiv.
[25] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[26] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[27] Vitaly Shmatikov, et al. Chiron: Privacy-preserving Machine Learning as a Service, 2018, ArXiv.
[28] Blaine Nelson, et al. Can machine learning be secure?, 2006, ASIACCS '06.
[29] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[30] Ling Huang, et al. ANTIDOTE: understanding and defending against poisoning of anomaly detectors, 2009, IMC '09.
[31] Dawn Song, et al. Robust Physical-World Attacks on Deep Learning Models, 2017, ArXiv.
[32] Michael Naehrig, et al. CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy, 2016, ICML.
[33] Blaine Nelson, et al. Adversarial Machine Learning, 2011, AISec '11.
[34] Brendan Dolan-Gavitt, et al. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 2017, ArXiv.