Dynamic Backdoor Attacks Against Machine Learning Models
A. Salem, Rui Wen, M. Backes, Shiqing Ma, Yang Zhang
[1] S. Nelson, et al. Resolving Individuals Contributing Trace Amounts of DNA to Highly Complex Mixtures Using High-Density SNP Genotyping Microarrays, 2008, PLoS Genetics.
[2] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[3] D. Halbe, et al. “Who’s there?”, 2012.
[4] Somesh Jha, et al. Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing, 2014, USENIX Security Symposium.
[5] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[6] Yevgeniy Vorobeychik, et al. Optimal Randomized Classification in Adversarial Settings, 2014, AAMAS.
[7] Xiaogang Wang, et al. Deep Learning Face Attributes in the Wild, 2014, 2015 IEEE International Conference on Computer Vision (ICCV).
[8] Yevgeniy Vorobeychik, et al. Scalable Optimization of Randomized Operational Decisions in Adversarial Classification Settings, 2015, AISTATS.
[9] Somesh Jha, et al. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures, 2015, CCS.
[10] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[11] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[12] Fan Zhang, et al. Stealing Machine Learning Models via Prediction APIs, 2016, USENIX Security Symposium.
[13] Seong Joon Oh, et al. Adversarial Image Perturbation for Privacy Protection: A Game Theory Perspective, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[14] Ramprasaath R. Selvaraju, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[15] Giuseppe Ateniese, et al. Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning, 2017, CCS.
[16] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[17] Brendan Dolan-Gavitt, et al. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 2017, ArXiv.
[18] Carl A. Gunter, et al. Towards Measuring Membership Privacy, 2017, ArXiv.
[19] Emiliano De Cristofaro, et al. LOGAN: Evaluating Privacy Leakage of Generative Models Using Generative Adversarial Networks, 2017, ArXiv.
[20] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[21] Vitaly Shmatikov, et al. Membership Inference Attacks Against Machine Learning Models, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[22] Wen-Chuan Lee, et al. Trojaning Attack on Neural Networks, 2018, NDSS.
[23] Reza Shokri, et al. Machine Learning with Membership Privacy using Adversarial Regularization, 2018, CCS.
[24] Binghui Wang, et al. Stealing Hyperparameters in Machine Learning, 2018, 2018 IEEE Symposium on Security and Privacy (SP).
[25] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[26] Vitaly Shmatikov, et al. The Natural Auditor: How To Tell If Someone Used Your Words To Train Their Model, 2018, ArXiv.
[27] Jinyuan Jia, et al. AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning, 2018, USENIX Security Symposium.
[28] Nikita Borisov, et al. Property Inference Attacks on Fully Connected Neural Networks using Permutation Invariant Representations, 2018, CCS.
[29] Yang Zhang, et al. Tagvisor: A Privacy Advisor for Sharing Hashtags, 2018, WWW.
[30] Kai Chen, et al. Understanding Membership Inferences on Well-Generalized Learning Models, 2018, ArXiv.
[31] Tudor Dumitras, et al. When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks, 2018, USENIX Security Symposium.
[32] Emiliano De Cristofaro, et al. Knock Knock, Who's There? Membership Inference on Aggregate Location Data, 2017, NDSS.
[33] Yanjun Qi, et al. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks, 2017, NDSS.
[34] Tudor Dumitras, et al. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks, 2018, NeurIPS.
[35] Somesh Jha, et al. Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting, 2017, 2018 IEEE 31st Computer Security Foundations Symposium (CSF).
[36] Seong Joon Oh, et al. Towards Reverse-Engineering Black-Box Neural Networks, 2017, ICLR.
[37] Chang Liu, et al. Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning, 2018, 2018 IEEE Symposium on Security and Privacy (SP).
[38] Emiliano De Cristofaro, et al. Under the Hood of Membership Inference Attacks on Aggregate Location Time-Series, 2019, ArXiv.
[39] Tribhuvanesh Orekondy, et al. Knockoff Nets: Stealing Functionality of Black-Box Models, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[40] Vitaly Shmatikov, et al. Exploiting Unintended Feature Leakage in Collaborative Learning, 2018, 2019 IEEE Symposium on Security and Privacy (SP).
[41] Ben Y. Zhao, et al. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks, 2019, 2019 IEEE Symposium on Security and Privacy (SP).
[42] Yang Zhang, et al. MBeacon: Privacy-Preserving Beacons for DNA Methylation Data, 2019, NDSS.
[43] Damith Chinthana Ranasinghe, et al. STRIP: A Defence Against Trojan Attacks on Deep Neural Networks, 2019, ACSAC.
[44] N. Gong, et al. MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples, 2019, CCS.
[45] Mario Fritz, et al. ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models, 2018, NDSS.
[46] Ben Y. Zhao, et al. Latent Backdoor Attacks on Deep Neural Networks, 2019, CCS.
[47] Xiangyu Zhang, et al. ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation, 2019, CCS.
[48] Amir Houmansadr, et al. Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning, 2018, 2019 IEEE Symposium on Security and Privacy (SP).
[49] Vitaly Shmatikov, et al. Auditing Data Provenance in Text-Generation Models, 2018, KDD.
[50] Kartik Sreenivasan, et al. Attack of the Tails: Yes, You Really Can Backdoor Federated Learning, 2020, NeurIPS.
[51] Mario Fritz, et al. GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models, 2019, CCS.
[52] Anh Tran, et al. Input-Aware Dynamic Backdoor Attack, 2020, NeurIPS.
[53] Fan Yang, et al. An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks, 2020, KDD.
[54] Bao Gia Doan, et al. Februus: Input Purification Defence Against Trojan Attacks on Deep Neural Network Systems, 2019, ArXiv abs/1908.03369.
[55] H. Pirsiavash, et al. Hidden Trigger Backdoor Attacks, 2019, AAAI.
[56] Yang Zhang, et al. Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning, 2019, USENIX Security Symposium.
[57] James Bailey, et al. Clean-Label Backdoor Attacks on Video Recognition Models, 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[58] Yunfei Liu, et al. Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks, 2020, ECCV.
[59] Deliang Fan, et al. TBT: Targeted Neural Network Attack With Bit Trojan, 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[60] Yang Zhang, et al. Model Stealing Attacks Against Inductive Graph Neural Networks, 2021, 2022 IEEE Symposium on Security and Privacy (SP).
[61] Neil Zhenqiang Gong, et al. BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning, 2021, 2022 IEEE Symposium on Security and Privacy (SP).
[62] Michael Backes, et al. Stealing Links from Graph Neural Networks, 2020, USENIX Security Symposium.
[63] Emiliano De Cristofaro, et al. ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models, 2021, USENIX Security Symposium.
[64] Nikita Borisov, et al. Detecting AI Trojans Using Meta Neural Analysis, 2019, 2021 IEEE Symposium on Security and Privacy (SP).
[65] Yufei Chen, et al. Property Inference Attacks Against GANs, 2021, NDSS.
[66] Yang Zhang, et al. Quantifying and Mitigating Privacy Risks of Contrastive Learning, 2021, CCS.
[67] Michael Backes, et al. BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements, 2020, ACSAC.
[68] Michael Backes, et al. Get a Model! Model Hijacking Attack Against Machine Learning Models, 2021, NDSS.