Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks
[1] David A. Wagner, et al. Audio Adversarial Examples: Targeted Attacks on Speech-to-Text, 2018, 2018 IEEE Security and Privacy Workshops (SPW).
[2] Ming-Yu Liu, et al. Tactics of Adversarial Attack on Deep Reinforcement Learning Agents, 2017, IJCAI.
[3] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[4] Song Han, et al. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[5] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[6] Luis Muñoz-González, et al. Label Sanitization against Label Flipping Poisoning Attacks, 2018, Nemesis/UrbReas/SoGood/IWAISe/GDM@PKDD/ECML.
[7] Aleksander Madry, et al. Exploring the Landscape of Spatial Robustness, 2017, ICML.
[8] Forrest N. Iandola, et al. SqueezeDet: Unified, Small, Low Power Fully Convolutional Neural Networks for Real-Time Object Detection for Autonomous Driving, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[9] Ali Farhadi, et al. YOLOv3: An Incremental Improvement, 2018, ArXiv.
[10] Mansoor Alam, et al. A Deep Learning Approach for Network Intrusion Detection System, 2016, EAI Endorsed Trans. Security Safety.
[11] Ali Farhadi, et al. You Only Look Once: Unified, Real-Time Object Detection, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[12] Jorge Nocedal, et al. On the limited memory BFGS method for large scale optimization, 1989, Math. Program.
[13] Demis Hassabis, et al. Mastering the game of Go with deep neural networks and tree search, 2016, Nature.
[14] Geoffrey E. Hinton, et al. Deep Learning, 2015, Nature.
[15] Kaiming He, et al. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, 2015, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[16] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[17] Ares Lagae, et al. Procedural noise using sparse Gabor convolution, 2009, SIGGRAPH '09.
[18] Michael P. Wellman, et al. SoK: Security and Privacy in Machine Learning, 2018, 2018 IEEE European Symposium on Security and Privacy (EuroS&P).
[19] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[20] Valentin Khrulkov, et al. Art of Singular Vectors and Universal Adversarial Perturbations, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[21] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[22] Blaine Nelson, et al. Exploiting Machine Learning to Subvert Your Spam Filter, 2008, LEET.
[23] Tara N. Sainath, et al. Deep Neural Networks for Acoustic Modeling in Speech Recognition, 2012.
[24] Nando de Freitas, et al. A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning, 2010, ArXiv.
[25] Xiaojin Zhu, et al. Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners, 2015, AAAI.
[26] Dawn Xiaodong Song, et al. Black-box Attacks on Deep Neural Networks via Gradient Estimation, 2018, ICLR.
[27] Patrick D. McDaniel, et al. Machine Learning in Adversarial Settings, 2016, IEEE Security & Privacy.
[28] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[29] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[30] Suman Jana, et al. DeepTest: Automated Testing of Deep-Neural-Network-Driven Autonomous Cars, 2017, 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE).
[31] Shie Mannor, et al. Robust Logistic Regression and Classification, 2014, NIPS.
[32] Aleksander Madry, et al. On Evaluating Adversarial Robustness, 2019, ArXiv.
[33] Ares Lagae, et al. A Survey of Procedural Noise Functions, 2010, Comput. Graph. Forum.
[34] Jinfeng Yi, et al. Towards Query Efficient Black-box Attacks: An Input-free Perspective, 2018, AISec@CCS.
[35] Konstantin Berlin, et al. Deep neural network based malware detection using two dimensional binary program features, 2015, 2015 10th International Conference on Malicious and Unwanted Software (MALWARE).
[36] Guillermo Sapiro, et al. Robust Large Margin Deep Neural Networks, 2016, IEEE Transactions on Signal Processing.
[37] Nando de Freitas, et al. Taking the Human Out of the Loop: A Review of Bayesian Optimization, 2016, Proceedings of the IEEE.
[38] Carl E. Rasmussen, et al. Gaussian processes for machine learning, 2005, Adaptive Computation and Machine Learning.
[39] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[40] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[41] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[42] Jinfeng Yi, et al. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models, 2017, AISec@CCS.
[43] Sergey Ioffe, et al. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, 2016, AAAI.
[44] Matthias Bethge, et al. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness, 2018, ICLR.
[45] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[46] Ross B. Girshick, et al. Fast R-CNN, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[47] Konstantin Berlin, et al. eXpose: A Character-Level Convolutional Neural Network with Embeddings For Detecting Malicious URLs, File Paths and Registry Keys, 2017, ArXiv.
[48] Je-Won Kang, et al. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security, 2016, PLoS ONE.
[49] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[50] Carl E. Rasmussen, et al. Sparse Spectrum Gaussian Process Regression, 2010, J. Mach. Learn. Res.
[51] Ming-Yu Liu, et al. Tactics of Adversarial Attack on Deep Reinforcement Learning Agents, 2017.
[52] Marius Kloft, et al. Security analysis of online centroid anomaly detection, 2010, J. Mach. Learn. Res.
[53] Fabrice Neyret, et al. Understanding and controlling contrast oscillations in stochastic texture algorithms using Spectrum of Variance, 2016.
[54] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[55] Patrick D. McDaniel, et al. Adversarial Perturbations Against Deep Neural Networks for Malware Classification, 2016, ArXiv.
[56] R. Venkatesh Babu, et al. NAG: Network for Adversary Generation, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[57] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[58] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[59] Patrick D. McDaniel, et al. Making machine learning robust against adversarial inputs, 2018, Commun. ACM.
[60] Ling Huang, et al. Near-Optimal Evasion of Convex-Inducing Classifiers, 2010, AISTATS.
[61] Xiaogang Wang, et al. Deep Convolutional Network Cascade for Facial Point Detection, 2013, 2013 IEEE Conference on Computer Vision and Pattern Recognition.
[62] Pietro Perona, et al. Microsoft COCO: Common Objects in Context, 2014, ECCV.
[63] Sandy H. Huang, et al. Adversarial Attacks on Neural Network Policies, 2017, ICLR.
[64] Xin Zhang, et al. End to End Learning for Self-Driving Cars, 2016, ArXiv.
[65] Pascal Vincent, et al. Contractive Auto-Encoders: Explicit Invariance During Feature Extraction, 2011, ICML.
[66] Ian J. Goodfellow. Defense Against the Dark Arts: An overview of adversarial example security research and future research directions, 2018, ArXiv.
[67] Raja Giryes, et al. Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization, 2018, ECCV.
[68] Percy Liang, et al. Certified Defenses for Data Poisoning Attacks, 2017, NIPS.
[69] Yong Yang, et al. Transferable Adversarial Perturbations, 2018, ECCV.
[70] Ken Perlin, et al. Improving noise, 2002, SIGGRAPH.
[71] Yoshua Bengio, et al. How transferable are features in deep neural networks?, 2014, NIPS.
[72] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[73] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[74] Deborah Silver, et al. Feature Visualization, 1994, Scientific Visualization.
[75] Luis Muñoz-González, et al. Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection, 2018, ArXiv.
[76] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[77] Luis Muñoz-González, et al. The Security of Machine Learning Systems, 2018.
[78] Christopher Meek, et al. Adversarial learning, 2005, KDD '05.
[79] Mingyan Liu, et al. Spatially Transformed Adversarial Examples, 2018, ICLR.
[80] Zoubin Ghahramani, et al. Sparse Gaussian Processes using Pseudo-inputs, 2005, NIPS.
[81] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[82] Ali Farhadi, et al. YOLO9000: Better, Faster, Stronger, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[83] Fabio Roli, et al. Evasion Attacks against Machine Learning at Test Time, 2013, ECML/PKDD.
[84] Blaine Nelson, et al. Adversarial machine learning, 2011, AISec '11.
[85] Aleksander Madry, et al. Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors, 2018, ICLR.
[86] Micah Sherr, et al. Hidden Voice Commands, 2016, USENIX Security Symposium.
[87] Matthias Bethge, et al. Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models, 2017, ICLR.
[88] Alex Graves, et al. Playing Atari with Deep Reinforcement Learning, 2013, ArXiv.
[89] Patrick D. McDaniel, et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, 2016, ArXiv.
[90] Aleksander Madry, et al. A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations, 2017, ArXiv.
[91] Blaine Nelson, et al. Can machine learning be secure?, 2006, ASIACCS '06.
[92] Mingyan Liu, et al. Generating Adversarial Examples with Adversarial Networks, 2018, IJCAI.
[93] Ivan Laptev, et al. Learning and Transferring Mid-level Image Representations Using Convolutional Neural Networks, 2014, 2014 IEEE Conference on Computer Vision and Pattern Recognition.
[94] Stefano Ermon, et al. Sparse Gaussian Processes for Bayesian Optimization, 2016, UAI.
[95] Aleksander Madry, et al. Adversarial Examples Are Not Bugs, They Are Features, 2019, NeurIPS.
[96] Chang Liu, et al. Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning, 2018, 2018 IEEE Symposium on Security and Privacy (SP).
[97] Ken Perlin, et al. An image synthesizer, 1988.
[98] Edilson de Aguiar, et al. Facial expression recognition with Convolutional Neural Networks: Coping with few data and the training sample order, 2017, Pattern Recognit.
[99] Jasper Snoek, et al. Practical Bayesian Optimization of Machine Learning Algorithms, 2012, NIPS.
[100] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[101] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[102] Fabio Roli, et al. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization, 2017, AISec@CCS.