Avishek Joey Bose | Gauthier Gidel | Hugo Berard | Andre Cianflone | Pascal Vincent | Simon Lacoste-Julien | William L. Hamilton
[1] A. Wald. Statistical Decision Functions Which Minimize the Maximum Risk , 1945 .
[2] K. Fan,et al. Minimax Theorems , 1953, Proceedings of the National Academy of Sciences of the United States of America.
[3] Kurt Hornik,et al. Approximation capabilities of multilayer feedforward networks , 1991, Neural Networks.
[4] Eric van Damme,et al. Non-Cooperative Games , 2000 .
[5] Gaël Varoquaux,et al. Scikit-learn: Machine Learning in Python , 2011, J. Mach. Learn. Res..
[6] Tobias Scheffer,et al. Stackelberg games for adversarial prediction problems , 2011, KDD.
[7] R. Bass,et al. Review: P. Billingsley, Convergence of probability measures , 1971 .
[8] Tobias Scheffer,et al. Static prediction games for adversarial learning problems , 2012, J. Mach. Learn. Res..
[9] Yoshua Bengio,et al. Generative Adversarial Nets , 2014, NIPS.
[10] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[11] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.
[12] Andrew Zisserman,et al. Very Deep Convolutional Networks for Large-Scale Image Recognition , 2014, ICLR.
[13] Nikos Komodakis,et al. Wide Residual Networks , 2016, BMVC.
[14] Sergey Ioffe,et al. Rethinking the Inception Architecture for Computer Vision , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[15] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[16] Seyed-Mohsen Moosavi-Dezfooli,et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[17] Patrick D. McDaniel,et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples , 2016, ArXiv.
[18] Patrick D. McDaniel,et al. On the (Statistical) Detection of Adversarial Examples , 2017, ArXiv.
[19] Logan Engstrom,et al. Query-Efficient Black-box Adversarial Examples (superceded) , 2017 .
[21] Ben Poole,et al. Categorical Reparameterization with Gumbel-Softmax , 2016, ICLR.
[22] Yee Whye Teh,et al. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables , 2016, ICLR.
[23] Jan Hendrik Metzen,et al. On Detecting Adversarial Perturbations , 2017, ICLR.
[24] Ananthram Swami,et al. Practical Black-Box Attacks against Machine Learning , 2016, AsiaCCS.
[25] Kilian Q. Weinberger,et al. Densely Connected Convolutional Networks , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[26] Logan Engstrom,et al. Query-Efficient Black-box Adversarial Examples , 2017, ArXiv.
[27] Dawn Xiaodong Song,et al. Delving into Transferable Adversarial Examples and Black-box Attacks , 2016, ICLR.
[28] Jinfeng Yi,et al. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models , 2017, AISec@CCS.
[29] David A. Wagner,et al. MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples , 2017, ArXiv.
[30] Ian S. Fischer,et al. Adversarial Transformation Networks: Learning to Generate Adversarial Examples , 2017, ArXiv.
[31] Zhitao Gong,et al. Adversarial and Clean Data Are Not Twins , 2017, aiDM@SIGMOD.
[32] David A. Wagner,et al. Towards Evaluating the Robustness of Neural Networks , 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[33] Dan Boneh,et al. Ensemble Adversarial Training: Attacks and Defenses , 2017, ICLR.
[34] Lu Sun,et al. A survey of practical adversarial example attacks , 2018, Cybersecur..
[35] Kamyar Azizzadenesheli,et al. Stochastic Activation Pruning for Robust Adversarial Defense , 2018, ICLR.
[36] Sameer Singh,et al. Generating Natural Adversarial Examples , 2017, ICLR.
[37] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[38] Debdeep Mukhopadhyay,et al. Adversarial Attacks and Defences: A Survey , 2018, ArXiv.
[39] Colin Raffel,et al. Thermometer Encoding: One Hot Way To Resist Adversarial Examples , 2018, ICLR.
[40] Logan Engstrom,et al. Synthesizing Robust Adversarial Examples , 2017, ICML.
[41] Jun Zhu,et al. Boosting Adversarial Attacks with Momentum , 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[42] Gauthier Gidel,et al. Parametric Adversarial Divergences are Good Task Losses for Generative Modeling , 2017, ICLR.
[43] Parham Aarabi,et al. Adversarial Attacks on Face Detectors Using Neural Net Based Constrained Optimization , 2018, 2018 IEEE 20th International Workshop on Multimedia Signal Processing (MMSP).
[44] Ian S. Fischer,et al. Learning to Attack: Adversarial Transformation Networks , 2018, AAAI.
[45] Logan Engstrom,et al. Black-box Adversarial Attacks with Limited Queries and Information , 2018, ICML.
[46] Ajmal Mian,et al. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey , 2018, IEEE Access.
[47] Yoshua Bengio,et al. A3T: Adversarially Augmented Adversarial Training , 2018, ArXiv.
[48] Mingyan Liu,et al. Generating Adversarial Examples with Adversarial Networks , 2018, IJCAI.
[49] Moustapha Cissé,et al. Countering Adversarial Images using Input Transformations , 2018, ICLR.
[50] Yang Song,et al. PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples , 2017, ICLR.
[51] Dimitris S. Papailiopoulos,et al. A Geometric Perspective on the Transferability of Adversarial Directions , 2018, AISTATS.
[52] Aleksander Madry,et al. Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors , 2018, ICLR.
[53] William L. Hamilton,et al. Generalizable Adversarial Attacks Using Generative Models , 2019, ArXiv.
[54] William L. Hamilton,et al. Generalizable Adversarial Attacks with Latent Variable Perturbation Modelling , 2019, ArXiv.
[55] Natalia Gimelshein,et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library , 2019, NeurIPS.
[56] Luyu Wang,et al. advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch , 2019, ArXiv.
[57] Jinfeng Yi,et al. AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks , 2018, AAAI.
[58] Gauthier Gidel,et al. A Variational Inequality Perspective on Generative Adversarial Networks , 2018, ICLR.
[59] Andrew Gordon Wilson,et al. Simple Black-box Adversarial Attacks , 2019, ICML.
[60] James Bailey,et al. Black-box Adversarial Attacks on Video Recognition Models , 2019, ACM Multimedia.
[61] Tong Zhang,et al. NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks , 2019, ICML.
[62] Jun Zhu,et al. Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks , 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[63] Arun Balaji Buduru,et al. A Survey of Black-Box Adversarial Attacks on Computer Vision Models , 2019, ArXiv.
[64] Aleksander Madry,et al. On Evaluating Adversarial Robustness , 2019, ArXiv.
[65] Yahong Han,et al. Curls & Whey: Boosting Black-Box Adversarial Attacks , 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[66] Mark W. Schmidt,et al. Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates , 2019, NeurIPS.
[67] Alan L. Yuille,et al. Improving Transferability of Adversarial Examples With Input Diversity , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[68] Hu Zhang,et al. Query-efficient Meta Attack to Deep Neural Networks , 2019, ICLR.
[69] Matthias Hein,et al. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks , 2020, ICML.
[70] James Bailey,et al. Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets , 2020, ICLR.
[71] Song Bai,et al. Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses , 2019, ECCV.
[72] Tong Zhang,et al. Black-Box Adversarial Attack with Transferable Model-based Embedding , 2019, ICLR.
[73] Yoram Bachrach,et al. Minimax Theorem for Latent Games or: How I Learned to Stop Worrying about Mixed-Nash and Love Neural Nets , 2020, ArXiv.
[74] Song Bai,et al. Learning Transferable Adversarial Examples via Ghost Networks , 2018, AAAI.
[75] Jiliang Tang,et al. Adversarial Attacks and Defenses in Images, Graphs and Text: A Review , 2019, International Journal of Automation and Computing.
[76] Nicolas Flammarion,et al. Square Attack: a query-efficient black-box adversarial attack via random search , 2019, ECCV.
[77] Matthias Hein,et al. Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack , 2019, ICML.
[78] Ruitong Huang,et al. Max-Margin Adversarial (MMA) Training: Direct Input Space Margin Maximization through Adversarial Training , 2018, ICLR.
[79] Pascal Vincent,et al. A Closer Look at the Optimization Landscapes of Generative Adversarial Networks , 2019, ICLR.