Many machine learning methods make assumptions about the data, such as stationarity and independence, so that learning is efficient and requires less data. However, these assumptions can give rise to vulnerabilities when violated by smart adversaries. In this paper, we propose a novel algorithm that crafts adversarial input samples by modifying as small a fraction of the input features as possible in order to bypass the decision boundary of widely used binary classifiers based on Support Vector Machines (SVMs). We show that our algorithm reliably produces adversarial samples that are misclassified with a 98% success rate while modifying only 22% of the input features on average. Our goal is to evaluate the robustness of classification algorithms for high-dimensional network data by performing evasion attacks with carefully designed adversarial examples. The proposed algorithm is evaluated on real network traffic datasets (CAIDA 2007 and CAIDA 2016).
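To make the attack class concrete, below is a minimal, hypothetical sketch of one common evasion strategy against a linear SVM: perturb only the k features with the largest weight magnitudes until the sample crosses the decision boundary. This is a generic illustration, not the paper's algorithm; the toy data, the choice k = 11 (22% of 50 features, echoing the paper's average), and the step size are all assumptions for demonstration.

```python
# Hypothetical sketch: evading a *linear* SVM by perturbing only the
# k features with the largest weight magnitudes. Generic illustration
# of the attack class, not the paper's exact algorithm.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy stand-in for high-dimensional network-traffic features.
X = rng.normal(size=(500, 50))
w_true = rng.normal(size=50)
y = (X @ w_true > 0).astype(int)

clf = LinearSVC(dual=False).fit(X, y)
w, b = clf.coef_.ravel(), clf.intercept_[0]

def evade(x, k=11, step=0.1, max_iter=1000):
    """Push x across the boundary by editing only the k highest-|w| features."""
    x_adv = x.copy()
    idx = np.argsort(np.abs(w))[-k:]       # features the classifier relies on most
    target_sign = -np.sign(x @ w + b)      # direction that flips the decision
    for _ in range(max_iter):
        if np.sign(x_adv @ w + b) == target_sign:
            break                          # decision boundary crossed
        x_adv[idx] += step * target_sign * np.sign(w[idx])
    return x_adv

x = X[0]
x_adv = evade(x)  # modifies 11 of 50 features, i.e. 22%
print("original:", clf.predict([x])[0], "adversarial:", clf.predict([x_adv])[0])
```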