Lingjuan Lyu | Xingjun Ma | Bo Li | Yige Li | Nodens Koren | Xixiang Lyu
[1] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[2] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, arXiv.
[3] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[4] Yoshua Bengio, et al. FitNets: Hints for Thin Deep Nets, 2014, ICLR.
[5] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[6] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[7] Razvan Pascanu, et al. Overcoming catastrophic forgetting in neural networks, 2016, Proceedings of the National Academy of Sciences.
[8] Brendan Dolan-Gavitt, et al. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 2017, arXiv.
[9] Naiyan Wang, et al. Like What You Like: Knowledge Distill via Neuron Selectivity Transfer, 2017, arXiv.
[10] Dawn Xiaodong Song, et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 2017, arXiv.
[11] Matthew Richardson, et al. Do Deep Convolutional Nets Really Need to be Deep and Convolutional?, 2016, ICLR.
[12] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[13] Graham W. Taylor, et al. Improved Regularization of Convolutional Neural Networks with Cutout, 2017, arXiv.
[14] Nikos Komodakis, et al. Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer, 2016, ICLR.
[15] Wen-Chuan Lee, et al. Trojaning Attack on Neural Networks, 2018, NDSS.
[16] Aleksander Madry, et al. Clean-Label Backdoor Attacks, 2018.
[17] Wei Liu, et al. Neural Compatibility Modeling with Attentive Knowledge Distillation, 2018, SIGIR.
[18] Hongyi Zhang, et al. mixup: Beyond Empirical Risk Minimization, 2017, ICLR.
[19] Jerry Li, et al. Spectral Signatures in Backdoor Attacks, 2018, NeurIPS.
[20] Brendan Dolan-Gavitt, et al. Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks, 2018, RAID.
[21] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[22] James Bailey, et al. Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality, 2018, ICLR.
[23] Ali Farhadi, et al. Label Refinery: Improving ImageNet Classification through Label Progression, 2018, arXiv.
[24] Tudor Dumitras, et al. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks, 2018, NeurIPS.
[25] Yukun Yang, et al. Defending Neural Backdoors via Generative Distribution Modeling, 2019, NeurIPS.
[26] Ben Y. Zhao, et al. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks, 2019, 2019 IEEE Symposium on Security and Privacy (SP).
[27] Benjamin Edwards, et al. Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering, 2018, SafeAI@AAAI.
[28] Damith Chinthana Ranasinghe, et al. STRIP: A Defence Against Trojan Attacks on Deep Neural Networks, 2019, ACSAC.
[29] James Bailey, et al. Black-box Adversarial Attacks on Video Recognition Models, 2019, ACM Multimedia.
[30] Jin Young Choi, et al. Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons, 2018, AAAI.
[31] Neil D. Lawrence, et al. Variational Information Distillation for Knowledge Transfer, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[32] Jishen Zhao, et al. DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks, 2019, IJCAI.
[33] Mauro Barni, et al. A New Backdoor Attack in CNNs by Training Set Corruption Without Label Poisoning, 2019, 2019 IEEE International Conference on Image Processing (ICIP).
[34] Ben Y. Zhao, et al. Latent Backdoor Attacks on Deep Neural Networks, 2019, CCS.
[35] James Bailey, et al. On the Convergence and Robustness of Adversarial Training, 2021, ICML.
[36] Shouling Ji, et al. Invisible Poisoning: Highly Stealthy Targeted Poisoning Attack, 2019, Inscrypt.
[37] Yong Jiang, et al. Backdoor Learning: A Survey, 2020, IEEE Transactions on Neural Networks and Learning Systems.
[38] Vitaly Shmatikov, et al. How To Backdoor Federated Learning, 2018, AISTATS.
[39] Anh Tran, et al. Input-Aware Dynamic Backdoor Attack, 2020, NeurIPS.
[40] James Bailey, et al. Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness, 2020, arXiv.
[41] James Bailey, et al. Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles, 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[42] James Bailey, et al. Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets, 2020, ICLR.
[43] Brenda Praggastis, et al. Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers, 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[44] Heiko Hoffmann, et al. Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[45] Bo Li, et al. DBA: Distributed Backdoor Attacks against Federated Learning, 2020, ICLR.
[46] F. Xavier Roca, et al. Pay Attention to the Activations: A Modular Attention Mechanism for Fine-Grained Image Recognition, 2020, IEEE Transactions on Multimedia.
[47] James Bailey, et al. Improving Adversarial Robustness Requires Revisiting Misclassified Examples, 2020, ICLR.
[48] Philip S. Yu, et al. Privacy and Robustness in Federated Learning: Attacks and Defenses, 2020, IEEE Transactions on Neural Networks and Learning Systems.
[49] Zhifeng Li, et al. Rethinking the Trigger of Backdoor Attack, 2020, arXiv.
[50] Sencun Zhu, et al. Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation, 2018, CODASPY.
[51] Yunfei Liu, et al. Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks, 2020, ECCV.
[52] Haihong Tang, et al. Hearing Lips: Improving Lip Reading by Distilling Speech Recognizers, 2019, AAAI.
[53] Karthikeyan Natesan Ramamurthy, et al. Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness, 2020, ICLR.
[54] Minhui Xue, et al. Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization, 2019, IEEE Transactions on Dependable and Secure Computing.
[55] James Bailey, et al. Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems, 2019, Pattern Recognition.
[56] Qi Li, et al. Removing Backdoor-Based Watermarks in Neural Networks with Limited Data, 2020, 2020 25th International Conference on Pattern Recognition (ICPR).