Practical Black-Box Attacks on Deep Neural Networks Using Efficient Query Mechanisms
Arjun Nitin Bhagoji | Warren He | Bo Li | Dawn Song
[1] F. B. Hildebrand. Advanced Calculus for Applications, 1962.
[2] J. Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation, 1992.
[3] Jorge Nocedal and Stephen J. Wright. Numerical Optimization, 2006, Springer.
[4] James C. Spall. Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control, 2003, Wiley-Interscience Series in Discrete Mathematics and Optimization.
[5] Yann LeCun, et al. The MNIST database of handwritten digits, 2005.
[6] Riccardo Poli, et al. Particle swarm optimization, 1995, Swarm Intelligence.
[7] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[8] Geoffrey I. Webb, et al. Encyclopedia of Machine Learning, 2011.
[9] Ling Huang, et al. Query Strategies for Evading Convex-Inducing Classifiers, 2010, J. Mach. Learn. Res.
[10] Jonathon Shlens, et al. A Tutorial on Principal Component Analysis, 2014, ArXiv.
[11] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[12] Luca Rigazio, et al. Towards Deep Neural Network Architectures Robust to Adversarial Examples, 2014, ICLR.
[13] Geoffrey E. Hinton, et al. Deep Learning, 2015, Nature.
[14] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[15] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[16] Nina Narodytska, et al. Simple Black-Box Adversarial Perturbations for Deep Networks, 2016, ArXiv.
[17] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[18] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[19] Patrick D. McDaniel, et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, 2016, ArXiv.
[20] Lujo Bauer, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, 2016, CCS.
[21] Ananthram Swami, et al. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, 2016, ArXiv.
[22] Yanjun Qi, et al. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers, 2016, NDSS.
[23] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[24] Dan Boneh, et al. The Space of Transferable Adversarial Examples, 2017, ArXiv.
[25] Wieland Brendel, et al. Decision-Based Adversarial Attacks: Reliable Attacks against Black-Box Machine Learning Models, 2017, ICLR.
[26] Xi Chen, et al. Evolution Strategies as a Scalable Alternative to Reinforcement Learning, 2017, ArXiv.
[27] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[28] Dawn Xiaodong Song, et al. Delving into Transferable Adversarial Examples and Black-box Attacks, 2016, ICLR.
[29] Hung Dang, et al. Evading Classifiers by Morphing in the Dark, 2017, CCS.
[30] Jinfeng Yi, et al. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models, 2017, AISec@CCS.
[31] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[32] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[33] Pascal Frossard, et al. Analysis of classifiers’ robustness to adversarial perturbations, 2015, Machine Learning.
[34] Nina Narodytska, et al. Simple Black-Box Adversarial Attacks on Deep Neural Networks, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[35] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[36] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[37] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[38] Logan Engstrom, et al. Black-box Adversarial Attacks with Limited Queries and Information, 2018, ICML.