Moving target defense for adaptive adversaries

Machine learning (ML) plays a central role in solving many security problems, for example by enabling malicious and benign activities to be distinguished rapidly and accurately so that appropriate action can be taken. Unfortunately, a standard assumption in ML, that the training and test data are identically distributed, is typically violated in security applications, degrading algorithm performance and reducing security. Previous research has attempted to address this challenge by developing ML algorithms that are either robust to differences between training and test data or able to predict and account for these differences. This paper adopts a different approach, developing a class of moving target (MT) defenses that are difficult for adversaries to reverse-engineer, which in turn limits the adversaries' ability to induce training/test differences that benefit them. We leverage the coevolutionary relationship between attackers and defenders to derive a simple, flexible MT defense strategy that is optimal or nearly optimal for a broad range of security problems. Case studies involving two distinct cyber defense applications demonstrate that the proposed MT algorithm outperforms standard static methods, offering effective defense against intelligent, adaptive adversaries.
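To make the moving target idea concrete, the sketch below shows one way such a defense could be realized: a pool of classifiers trained on randomly chosen feature subsets, with each incoming query answered by a randomly selected pool member so that an adversary probing the system never sees a single fixed decision boundary. The pool construction and the uniform switching rule are illustrative assumptions for this sketch, not the specific MT strategy derived in the paper.

```python
# Minimal sketch of a moving-target classifier ensemble (assumed design, not the
# paper's exact method): train several models on random feature subsets and
# randomize which model answers each query, making reverse-engineering harder.
import numpy as np
from sklearn.linear_model import LogisticRegression


class MovingTargetDefense:
    def __init__(self, n_models=5, feature_fraction=0.7, seed=0):
        self.n_models = n_models
        self.feature_fraction = feature_fraction
        self.rng = np.random.default_rng(seed)
        self.models = []  # list of (feature_indices, fitted_classifier) pairs

    def fit(self, X, y):
        n_features = X.shape[1]
        k = max(1, int(self.feature_fraction * n_features))
        self.models = []
        for _ in range(self.n_models):
            # Each pool member sees a different random slice of the feature space.
            idx = self.rng.choice(n_features, size=k, replace=False)
            clf = LogisticRegression(max_iter=1000).fit(X[:, idx], y)
            self.models.append((idx, clf))
        return self

    def predict(self, X):
        # Each query is answered by a randomly chosen pool member, so repeated
        # adversarial probes observe a "moving" decision boundary.
        preds = np.empty(X.shape[0], dtype=int)
        for i, x in enumerate(X):
            idx, clf = self.models[self.rng.integers(self.n_models)]
            preds[i] = clf.predict(x[idx].reshape(1, -1))[0]
        return preds
```

In this sketch the switching distribution is uniform; the paper's contribution concerns how to choose and adapt such a strategy against adaptive adversaries, which is not captured here.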
