[1] Pramod K. Varshney, et al. Anomalous Example Detection in Deep Learning: A Survey, 2020, IEEE Access.
[2] Fabian Pedregosa, et al. Hyperparameter optimization with approximate gradient, 2016, ICML.
[3] Lawrence Carin, et al. Second-Order Adversarial Attack and Certifiable Robustness, 2018, ArXiv.
[4] Justin Domke, et al. Generic Methods for Optimization-Based Modeling, 2012, AISTATS.
[5] Pradeep Ravikumar, et al. MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius, 2020, ICLR.
[6] Tom Goldstein, et al. Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates, 2020, ICLR.
[7] Chang Liu, et al. Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning, 2018, 2018 IEEE Symposium on Security and Privacy (SP).
[8] Timothy A. Mann, et al. On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models, 2018, ArXiv.
[9] Tudor Dumitras, et al. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks, 2018, NeurIPS.
[10] Ryan P. Adams, et al. Gradient-based Hyperparameter Optimization through Reversible Learning, 2015, ICML.
[11] Xiaojin Zhu, et al. Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners, 2015, AAAI.
[12] Jonathan F. Bard, et al. Practical Bilevel Optimization: Algorithms and Applications, 1998.
[13] Tom Goldstein, et al. Transferable Clean-Label Poisoning Attacks on Deep Neural Nets, 2019, ICML.
[14] Greg Yang, et al. Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers, 2019, NeurIPS.
[15] Percy Liang, et al. Certified Defenses for Data Poisoning Attacks, 2017, NIPS.
[16] Suman Jana, et al. Certified Robustness to Adversarial Examples with Differential Privacy, 2018, 2019 IEEE Symposium on Security and Privacy (SP).
[17] Fabio Roli, et al. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization, 2017, AISec@CCS.
[18] Aleksander Madry, et al. Clean-Label Backdoor Attacks, 2018.
[19] Jonas Geiping, et al. MetaPoison: Practical General-purpose Clean-label Data Poisoning, 2020, NeurIPS.
[20] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[21] Paolo Frasconi, et al. Forward and Reverse Gradient-Based Hyperparameter Optimization, 2017, ICML.
[22] Ting Wang, et al. Backdoor attacks against learning systems, 2017, 2017 IEEE Conference on Communications and Network Security (CNS).
[23] Yihan Wang, et al. Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond, 2020, NeurIPS.
[24] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Aditi Raghunathan, et al. Semidefinite relaxations for certifying robustness to adversarial examples, 2018, NeurIPS.
[26] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[27] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[28] J. Zico Kolter, et al. Certified Adversarial Robustness via Randomized Smoothing, 2019, ICML.
[29] Byron Boots, et al. Truncated Back-propagation for Bilevel Optimization, 2018, AISTATS.
[30] Dawn Xiaodong Song, et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 2017, ArXiv.
[31] Pushmeet Kohli, et al. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks, 2018, ICML.
[32] Po-Sen Huang, et al. Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation, 2019, EMNLP/IJCNLP.
[33] Jihun Hamm, et al. Penalty Method for Inversion-Free Deep Bilevel Optimization, 2019, ArXiv.
[34] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[35] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[36] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.