[1] Angelos Stavrou,et al. When a Tree Falls: Using Diversity in Ensemble Classifiers to Identify Evasion in Malware Detectors , 2016, NDSS.
[2] Leyla Bilge,et al. Before we knew it: an empirical study of zero-day attacks in the real world , 2012, CCS.
[3] Fabio Roli,et al. Security Evaluation of Pattern Classifiers under Attack , 2014, IEEE Transactions on Knowledge and Data Engineering.
[4] Ramana Rao Kompella,et al. PhishNet: Predictive Blacklisting to Detect Phishing Attacks , 2010, 2010 Proceedings IEEE INFOCOM.
[5] Patrick P. K. Chan,et al. Spam filtering for short messages in adversarial environment , 2015, Neurocomputing.
[6] Jun Ho Huh,et al. Phishing Detection with Popular Search Engines: Simple and Effective , 2011, FPS.
[7] David A. Wagner,et al. Mimicry attacks on host-based intrusion detection systems , 2002, CCS '02.
[8] Fabio Roli,et al. Evasion Attacks against Machine Learning at Test Time , 2013, ECML/PKDD.
[9] Edoardo Amaldi,et al. Ectropy of diversity measures for populations in Euclidean space , 2011, Inf. Sci..
[10] Peter E. Hart,et al. Nearest neighbor pattern classification , 1967, IEEE Trans. Inf. Theory.
[11] Mehmed M. Kantardzic,et al. Cracking the Smart ClickBot , 2011, 2011 13th IEEE International Symposium on Web Systems Evolution (WSE).
[12] Fan Zhang,et al. Stealing Machine Learning Models via Prediction APIs , 2016, USENIX Security Symposium.
[13] Myriam Abramson,et al. Toward Adversarial Online Learning and the Science of Deceptive Machines , 2015, AAAI Fall Symposia.
[14] Sergio Pastrana Portillo,et al. Attacks against intrusion detection networks: evasion, reverse engineering and optimal countermeasures , 2014.
[15] Christopher Meek,et al. Good Word Attacks on Statistical Spam Filters , 2005, CEAS.
[16] Sugata Sanyal,et al. Application Layer Intrusion Detection with Combination of Explicit-Rule-Based and Machine Learning Algorithms and Deployment in Cyber-Defence Program , 2014, ArXiv.
[17] Jingrui He,et al. Nearest-Neighbor-Based Active Learning for Rare Category Detection , 2007, NIPS.
[18] Ling Huang,et al. Query Strategies for Evading Convex-Inducing Classifiers , 2010, J. Mach. Learn. Res..
[19] Christopher Meek,et al. Adversarial learning , 2005, KDD '05.
[20] Claudia Eckert,et al. Support vector machines under adversarial label contamination , 2015, Neurocomputing.
[21] Patrick P. K. Chan,et al. An Improved Reject on Negative Impact Defense , 2014, ICMLC.
[22] M. H. P. Chaves,et al. Exploring the Spam Arms Race to Characterize Spam Evolution , 2010.
[23] Alvaro A. Cárdenas,et al. Big Data Analytics for Security , 2013, IEEE Security & Privacy.
[24] Indrė Žliobaitė. Learning under Concept Drift: an Overview , 2010.
[25] Darryl D'Souza. Avatar CAPTCHA: telling computers and humans apart via face classification and mouse dynamics , 2014.
[26] David Haussler,et al. Probably Approximately Correct Learning , 2010, Encyclopedia of Machine Learning.
[27] Fabio Roli,et al. Pattern Recognition Systems under Attack: Design Issues and Research Challenges , 2014, Int. J. Pattern Recognit. Artif. Intell..
[28] Leo Breiman,et al. Random Forests , 2001, Machine Learning.
[29] Shouhuai Xu,et al. An evasion and counter-evasion study in malicious websites detection , 2014, 2014 IEEE Conference on Communications and Network Security.
[30] Mehmed M. Kantardzic,et al. A grid density based framework for classifying streaming data in the presence of concept drift , 2015, Journal of Intelligent Information Systems.
[31] Blaine Nelson,et al. Can machine learning be secure? , 2006, ASIACCS '06.
[32] Gaël Varoquaux,et al. Scikit-learn: Machine Learning in Python , 2011, J. Mach. Learn. Res..
[33] Ling Huang,et al. Near-Optimal Evasion of Convex-Inducing Classifiers , 2010, AISTATS.
[34] Jin B. Hong,et al. Assessing the Effectiveness of Moving Target Defenses Using Security Models , 2016, IEEE Transactions on Dependable and Secure Computing.
[35] Jianfeng Lu,et al. Active learning via query synthesis and nearest neighbour search , 2015, Neurocomputing.
[36] Ling Huang,et al. Approaches to adversarial drift , 2013, AISec.
[37] Mahdi Zamani,et al. Machine Learning Techniques for Intrusion Detection , 2013, ArXiv.
[38] Patrick D. McDaniel,et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples , 2016, ArXiv.
[39] Xiangliang Zhang,et al. Adding Robustness to Support Vector Machines Against Adversarial Reverse Engineering , 2014, CIKM.
[40] Mehmed M. Kantardzic,et al. Monitoring Classification Blindspots to Detect Drifts from Unlabeled Data , 2016, 2016 IEEE 17th International Conference on Information Reuse and Integration (IRI).
[41] Joung Woo Ryu,et al. 'Security Theater': On the Vulnerability of Classifiers to Exploratory Attacks , 2017, PAISI.
[42] Ananthram Swami,et al. The Limitations of Deep Learning in Adversarial Settings , 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[43] Daniel S. Yeung,et al. A Causative Attack Against Semi-supervised Learning , 2014, ICMLC.
[44] C. Causer. The Art of War , 2011, IEEE Potentials.
[45] Nitesh V. Chawla,et al. SMOTE: Synthetic Minority Over-sampling Technique , 2002, J. Artif. Intell. Res..
[46] Jie Chen,et al. Optimal Contraction Theorem for Exploration–Exploitation Tradeoff in Search and Optimization , 2009, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans.
[47] Gian Luca Marcialis,et al. Robustness of multi-modal biometric systems under realistic spoof attacks against all traits , 2011, 2011 IEEE Workshop on Biometric Measurements and Systems for Security and Medical Applications (BIOMS).
[48] Bhavani M. Thuraisingham,et al. Adversarial support vector machine learning , 2012, KDD.
[49] Pavel Laskov,et al. Practical Evasion of a Learning-Based Classifier: A Case Study , 2014, 2014 IEEE Symposium on Security and Privacy.
[50] Indrė Žliobaitė. Learning under Concept Drift: an Overview , 2010, ArXiv.