Arjun Nitin Bhagoji | Daniel Cullina | Prateek Mittal
[1] Davide Anguita, et al. A Public Domain Dataset for Human Activity Recognition using Smartphones, 2013, ESANN.
[2] Ming Yang, et al. DeepFace: Closing the Gap to Human-Level Performance in Face Verification, 2014, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Jack W. Stokes, et al. Large-scale malware classification using random projections and neural networks, 2013, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[4] Ling Huang, et al. Stealthy poisoning attacks on PCA-based anomaly detectors, 2009, SIGMETRICS Perform. Evaluation Rev.
[5] Gordon V. Cormack, et al. Email Spam Filtering: A Systematic Review, 2008, Found. Trends Inf. Retr.
[6] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[7] Marius Kloft, et al. Online Anomaly Detection under Adversarial Impact, 2010, AISTATS.
[8] Zoubin Ghahramani, et al. A study of the effect of JPG compression on adversarial images, 2016, arXiv.
[9] Jürgen Schmidhuber, et al. Multi-column deep neural network for traffic sign classification, 2012, Neural Networks.
[10] Pavel Laskov, et al. Hidost: a static machine-learning-based detector of malicious files, 2016, EURASIP J. Inf. Secur.
[11] Fabio Roli, et al. Secure Kernel Machines against Evasion Attacks, 2016, AISec@CCS.
[12] A. Atiya, et al. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, 2005, IEEE Transactions on Neural Networks.
[13] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2016, IEEE Symposium on Security and Privacy (SP).
[14] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[15] Robert L. Grossman, et al. Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, 2005, KDD 2005.
[16] J. Doug Tygar, et al. Evasion and Hardening of Tree Ensemble Classifiers, 2015, ICML.
[17] Patrick P. K. Chan, et al. Adversarial Feature Selection Against Evasion Attacks, 2016, IEEE Transactions on Cybernetics.
[18] Heikki Mannila, et al. Random projection in dimensionality reduction: applications to image and text data, 2001, KDD '01.
[19] Yanjun Qi, et al. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers, 2016, NDSS.
[20] Dale Schuurmans, et al. Learning with a Strong Adversary, 2015, arXiv.
[21] Christopher Meek, et al. Adversarial learning, 2005, KDD '05.
[22] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2016, IEEE European Symposium on Security and Privacy (EuroS&P).
[23] Marius Kloft, et al. A framework for quantitative security analysis of machine learning, 2009, AISec '09.
[24] Qi Zhao, et al. Foveation-based Mechanisms Alleviate Adversarial Examples, 2015, arXiv.
[25] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[26] Pascal Frossard, et al. Analysis of classifiers' robustness to adversarial perturbations, 2015, Machine Learning.
[27] Uri Shaham, et al. Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization, 2015, arXiv.
[28] Fabio Roli, et al. Evasion Attacks against Machine Learning at Test Time, 2013, ECML/PKDD.
[29] Blaine Nelson, et al. Adversarial machine learning, 2011, AISec '11.
[30] Jonathon Shlens, et al. A Tutorial on Principal Component Analysis, 2014, arXiv.
[31] Ling Huang, et al. Near-Optimal Evasion of Convex-Inducing Classifiers, 2010, AISTATS.
[32] Seyed-Mohsen Moosavi-Dezfooli, et al. Robustness of classifiers: from adversarial to random noise, 2016, NIPS.
[33] John Salvatier, et al. Theano: A Python framework for fast computation of mathematical expressions, 2016, arXiv.
[34] Yann LeCun, et al. The MNIST database of handwritten digits, 2005.
[35] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[36] Qiyang Zhao, et al. Suppressing the Unusual: towards Robust CNNs using Symmetric Activation Functions, 2016, arXiv.
[37] Blaine Nelson, et al. Can machine learning be secure?, 2006, ASIACCS '06.
[38] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.
[39] Ananthram Swami, et al. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, 2016, arXiv.
[40] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017, IEEE Symposium on Security and Privacy (SP).
[41] Eric O. Postma, et al. Dimensionality Reduction: A Comparative Review, 2008.
[42] Colin Raffel, et al. Lasagne: First release, 2015.
[43] Jason Weston, et al. Natural Language Processing (Almost) from Scratch, 2011, J. Mach. Learn. Res.
[44] Angelos Stavrou, et al. When a Tree Falls: Using Diversity in Ensemble Classifiers to Identify Evasion in Malware Detectors, 2016, NDSS.
[45] Konrad Rieck, et al. DREBIN: Effective and Explainable Detection of Android Malware in Your Pocket, 2014, NDSS.
[46] Luca Rigazio, et al. Towards Deep Neural Network Architectures Robust to Adversarial Examples, 2014, ICLR.
[47] Gaël Varoquaux, et al. Scikit-learn: Machine Learning in Python, 2011, J. Mach. Learn. Res.
[48] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[49] Ling Huang, et al. ANTIDOTE: understanding and defending against poisoning of anomaly detectors, 2009, IMC '09.
[50] Micah Sherr, et al. Hidden Voice Commands, 2016, USENIX Security Symposium.
[51] Wenbo Guo, et al. Random Feature Nullification for Adversary Resistant Deep Architecture, 2016, arXiv.
[52] Patrick D. McDaniel, et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, 2016, arXiv.
[53] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[54] Fabio Roli, et al. Security Evaluation of Support Vector Machines in Adversarial Environments, 2014, arXiv.
[55] Lewis D. Griffin, et al. A Boundary Tilting Perspective on the Phenomenon of Adversarial Examples, 2016, arXiv.
[56] David A. Wagner, et al. Spoofing 2D Face Detection: Machines See People Who Aren't There, 2016, arXiv.
[57] Pedro M. Domingos, et al. Adversarial classification, 2004, KDD.
[58] Kevin Gimpel, et al. Visible Progress on Adversarial Images and a New Saliency Map, 2016, arXiv.
[59] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2015, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).