Making Targeted Black-box Evasion Attacks Effective and Efficient
N. Asokan | Mika Juuti | Buse Gul Atli
[1] Jinfeng Yi,et al. Towards Query Efficient Black-box Attacks: An Input-free Perspective , 2018, AISec@CCS.
[2] Lijun Zhang,et al. Query-Efficient Black-Box Attack by Active Learning , 2018, 2018 IEEE International Conference on Data Mining (ICDM).
[3] Fan Zhang,et al. Stealing Machine Learning Models via Prediction APIs , 2016, USENIX Security Symposium.
[4] Ting Wang,et al. Model-Reuse Attacks on Deep Learning Systems , 2018, CCS.
[5] Moustapha Cissé,et al. Countering Adversarial Images using Input Transformations , 2018, ICLR.
[6] Jun Zhu,et al. Boosting Adversarial Attacks with Momentum , 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[7] Jinfeng Yi,et al. AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks , 2018, AAAI.
[8] Fabio Roli,et al. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization , 2017, AISec@CCS.
[9] Pascal Frossard,et al. Classification regions of deep neural networks , 2017, ArXiv.
[10] Kilian Q. Weinberger,et al. Densely Connected Convolutional Networks , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[11] Logan Engstrom,et al. Black-box Adversarial Attacks with Limited Queries and Information , 2018, ICML.
[12] Lujo Bauer,et al. On the Suitability of Lp-Norms for Creating and Preventing Adversarial Examples , 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[13] David A. Wagner,et al. Towards Evaluating the Robustness of Neural Networks , 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[14] Patrick D. McDaniel,et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples , 2016, ArXiv.
[15] Aleksander Madry,et al. A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations , 2017, ArXiv.
[16] Mani Srivastava,et al. GenAttack: practical black-box attacks with gradient-free optimization , 2018, GECCO.
[17] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.
[18] Wieland Brendel,et al. Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models , 2017, ICLR.
[19] Hao Chen,et al. MagNet: A Two-Pronged Defense against Adversarial Examples , 2017, CCS.
[20] Dawn Xiaodong Song,et al. Delving into Transferable Adversarial Examples and Black-box Attacks , 2016, ICLR.
[21] Tom Schaul,et al. Natural Evolution Strategies , 2008, 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence).
[22] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[23] Forrest N. Iandola,et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size , 2016, ArXiv.
[24] Michael Naehrig,et al. CryptoNets: applying neural networks to encrypted data with high throughput and accuracy , 2016, ICML.
[25] Nina Narodytska,et al. Simple Black-Box Adversarial Attacks on Deep Neural Networks , 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[26] Ben Y. Zhao,et al. With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning , 2018, USENIX Security Symposium.
[27] Andrew Zisserman,et al. Very Deep Convolutional Networks for Large-Scale Image Recognition , 2014, ICLR.
[28] Anantha Chandrakasan,et al. Gazelle: A Low Latency Framework for Secure Neural Network Inference , 2018, IACR Cryptol. ePrint Arch.
[29] Kevin P. Murphy,et al. Machine learning - a probabilistic perspective , 2012, Adaptive computation and machine learning series.
[30] Samuel Marchal,et al. PRADA: Protecting Against DNN Model Stealing Attacks , 2018, 2019 IEEE European Symposium on Security and Privacy (EuroS&P).
[31] Geoffrey E. Hinton,et al. Deep Learning , 2015, Nature.
[32] Eric Keller,et al. Stochastic Substitute Training: A Gray-box Approach to Craft Adversarial Examples Against Gradient Obfuscation Defenses , 2018, AISec@CCS.
[33] Samy Bengio,et al. Adversarial examples in the physical world , 2016, ICLR.
[34] Alois Knoll,et al. Copy and Paste: A Simple But Effective Initialization Method for Black-Box Adversarial Attacks , 2019, CVPR.
[35] Fabio Roli,et al. Evasion Attacks against Machine Learning at Test Time , 2013, ECML/PKDD.
[36] Aleksander Madry,et al. Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors , 2018, ICLR.
[37] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[38] Sergey Ioffe,et al. Rethinking the Inception Architecture for Computer Vision , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[39] David A. Wagner,et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples , 2018, ICML.
[40] Jinfeng Yi,et al. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models , 2017, AISec@CCS.
[41] George Adam,et al. Reducing Adversarial Example Transferability Using Gradient Regularization , 2019, ArXiv.
[42] Ananthram Swami,et al. Practical Black-Box Attacks against Machine Learning , 2016, AsiaCCS.
[43] Alan L. Yuille,et al. Mitigating adversarial effects through randomization , 2017, ICLR.
[44] Yao Lu,et al. Oblivious Neural Network Predictions via MiniONN Transformations , 2017, IACR Cryptol. ePrint Arch.