Asymmetry in Coevolving Adversarial Systems

Asymmetries in adversarial systems arise from differences in the "situations" of attackers and defenders, such as differences in information access or in cost/benefit tradeoffs. While numerous studies have shown that asymmetry is important, little has been done to rigorously characterize its impact or to specify methods by which defenders can exploit it. This paper presents a formal framework for analyzing the origins and roles of asymmetric advantage in coevolving adversarial systems, and uses this framework to develop quantitative tools for understanding and exploiting asymmetry. The proposed framework explains why asymmetry has such a profound impact on the behavior of coevolving systems, and reveals a key feature of these systems: they can be reverse-engineered using only limited measurements. The analysis yields several new results, including (1) a demonstration that the machine learning systems increasingly deployed to harden essential systems against attack are themselves highly vulnerable, and (2) a methodology for designing "predictability-oriented" defenses that shift the advantages of asymmetry toward defenders. An empirical case study involving detection of money-laundering activity within the global interbank transaction system illustrates the utility of the proposed analytic framework in a high-consequence setting.
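
To make the "limited measurements" claim concrete, the following is a minimal sketch (not the paper's methodology) of how an attacker with only binary accept/reject query access might reverse-engineer a deployed, static detector and then craft an evading input. The synthetic transaction features, the logistic-regression detector, the probe budget of 50 queries, and the gradient-walk evasion loop are all hypothetical illustrations introduced here for exposition.

    # Illustrative sketch only: an attacker who can observe nothing but the
    # detector's accept/reject decisions fits a linear surrogate from a small
    # probe budget, then nudges a flagged input across the estimated boundary.
    import numpy as np

    rng = np.random.default_rng(0)

    # --- Defender: train a simple linear detector on synthetic "transaction" data ---
    n = 2000
    legit = rng.normal(loc=[1.0, 1.0], scale=0.5, size=(n, 2))   # e.g. (amount, frequency), rescaled
    illicit = rng.normal(loc=[2.5, 2.5], scale=0.5, size=(n, 2))
    X = np.vstack([legit, illicit])
    y = np.concatenate([np.zeros(n), np.ones(n)])

    def train_logreg(X, y, lr=0.1, epochs=500):
        """Plain gradient-descent logistic regression (stand-in for any deployed detector)."""
        Xb = np.hstack([X, np.ones((len(X), 1))])        # append bias column
        w = np.zeros(Xb.shape[1])
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-Xb @ w))
            w -= lr * Xb.T @ (p - y) / len(y)
        return w

    w = train_logreg(X, y)
    detector = lambda x: float(1.0 / (1.0 + np.exp(-(np.append(x, 1.0) @ w)))) > 0.5

    # --- Attacker: sees only accept/reject decisions, with a limited probe budget ---
    def estimate_boundary(detector, probes):
        """Fit a linear surrogate to (probe, decision) pairs: limited-measurement reverse engineering."""
        labels = np.array([detector(p) for p in probes], dtype=float)
        return train_logreg(probes, labels, epochs=2000)

    probe_budget = 50                                     # the "limited measurements"
    probes = rng.uniform(low=0.0, high=4.0, size=(probe_budget, 2))
    w_hat = estimate_boundary(detector, probes)

    # --- Attacker walks a flagged input down the surrogate's score until it evades ---
    x = np.array([2.5, 2.5])                              # clearly illicit-looking input
    step = 0.05
    for _ in range(200):
        if not detector(x):
            break
        x -= step * w_hat[:2] / np.linalg.norm(w_hat[:2]) # move against the surrogate weight vector
    print("evading input:", x, "flagged:", detector(x))

In this toy setting the static linear detector is recoverable from a handful of probes; defenses that reduce such predictability are the kind of asymmetry shift toward defenders that the abstract describes.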
