Backdoor Suppression in Neural Networks using Input Fuzzing and Majority Voting
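The title describes suppressing backdoor triggers by classifying many noise-perturbed ("fuzzed") copies of an input and taking a majority vote over the predicted labels. A minimal sketch of that idea follows; the function names, the toy classifier, and all parameters are hypothetical illustrations, not the paper's actual method.

```python
import numpy as np

def fuzzed_majority_vote(classify, x, n_copies=16, noise_std=0.1, rng=None):
    """Classify n_copies noise-perturbed versions of x and return the
    majority label. `classify` maps an input array to an integer label."""
    rng = np.random.default_rng(rng)
    votes = [classify(x + rng.normal(0.0, noise_std, size=x.shape))
             for _ in range(n_copies)]
    labels, counts = np.unique(votes, return_counts=True)
    return int(labels[np.argmax(counts)])

# Toy stand-in for a backdoored model: the "trigger" fires only when
# feature 0 is exactly 1.0. Additive noise almost surely breaks the
# trigger, so the majority vote recovers the clean label 0.
def toy_classify(x):
    return 7 if x[0] == 1.0 else 0

x = np.zeros(4)
x[0] = 1.0  # triggered input
print(fuzzed_majority_vote(toy_classify, x, n_copies=11, noise_std=0.05, rng=0))
# → 0 (the backdoor's target label 7 is suppressed)
```

The intuition is that a backdoor trigger is typically a brittle, precise pattern, while the model's clean decision is comparatively robust to small perturbations, so voting over fuzzed copies favors the clean label.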
[1] Brendan Dolan-Gavitt, et al. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 2017, arXiv.
[2] Brendan Dolan-Gavitt, et al. Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks, 2018, RAID.
[3] Dawn Xiaodong Song, et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 2017, arXiv.
[4] Wen-Chuan Lee, et al. Trojaning Attack on Neural Networks, 2018, NDSS.
[5] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[6] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[7] Benny Pinkas, et al. Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring, 2018, USENIX Security Symposium.
[8] Ankur Srivastava, et al. Neural Trojans, 2017, IEEE International Conference on Computer Design (ICCD).
[9] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[10] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[11] Eric R. Ziegel, et al. The Elements of Statistical Learning, 2003, Technometrics.
[12] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.