ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
Pin-Yu Chen | Huan Zhang | Yash Sharma | Jinfeng Yi | Cho-Jui Hsieh
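
For context, the core technique named in the title is coordinate-wise zeroth-order gradient estimation: the attacker never uses the model's gradients, and instead approximates them from black-box loss queries via a symmetric difference quotient, g_i ≈ (f(x + h·e_i) − f(x − h·e_i)) / (2h). Below is a minimal sketch of that estimator; the names loss_fn, zoo_gradient_estimate, coords, and h are illustrative choices, not taken from the paper's code.

    import numpy as np

    def zoo_gradient_estimate(loss_fn, x, coords, h=1e-4):
        """Symmetric-difference estimate of d(loss)/d(x_i) for a batch of
        coordinates, using only black-box evaluations of loss_fn.

        loss_fn : callable mapping an input array to a scalar attack loss
                  (e.g., a Carlini-Wagner style margin computed from the
                  model's output scores).
        x       : current adversarial candidate, a 1-D float array.
        coords  : indices of the coordinates to estimate this step.
        h       : finite-difference step size.
        """
        grad = np.zeros_like(x)
        for i in coords:
            e = np.zeros_like(x)
            e[i] = h
            # Two black-box queries per coordinate:
            # f(x + h*e_i) and f(x - h*e_i).
            grad[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2.0 * h)
        return grad

In the paper, this estimator is paired with coordinate-wise ADAM-style updates, evaluating only a small random batch of coordinates per iteration to keep the total query count manageable.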