Adversarial Machine Learning for Inferring Augmented Cyber Agility Prediction

Security analysts conduct continuous evaluations of cyber-defense tools to keep pace with advanced persistent threats. Cyber agility has become a critical proactive security resource, making it possible to measure how defenses adjust and react to rising threats. Accordingly, machine learning has been applied to cyber agility prediction as an essential effort to anticipate future security performance. Nevertheless, skilled and treacherous actors motivated by economic incentives continue to circumvent machine learning-based protection tools. Adversarial learning, widely studied in computer security and especially in intrusion detection, has therefore emerged as a new area of concern for the recently recognized, critical task of cyber agility prediction. The rationale is that if a sophisticated malicious actor obtains the cyber agility parameters, correct prediction cannot be guaranteed unless white-box attack failures are demonstrated. The challenge lies in recognizing that unconstrained adversaries hold vast potential capabilities; in practice, they could have perfect knowledge, i.e., a full understanding of the defense tool in use. We address this challenge by proposing an adversarial machine learning approach that achieves accurate cyber agility forecasts by mapping nefarious influence onto static defense-tool metrics. Considering that an adversary would aim to induce perilous confidence in a defense tool, we demonstrate resilient cyber agility prediction through verified attack signatures in dynamic learning windows. We then compare cyber agility prediction under negative influence with and without the proposed dynamic learning windows. Our numerical results show that the model's performance degrades without adversarial machine learning; such a misleading measure of performance could lead to incorrect software security patching.
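
To make the contrast between a statically trained predictor and one retrained over dynamic learning windows concrete, the following is a minimal, hypothetical sketch; it is not the paper's implementation. It builds a synthetic agility-score series, injects adversarial noise into the most recent observations, and compares one-step-ahead prediction error for a static ridge regressor against one retrained over a sliding window. The window size, lag features, perturbation scale, and the synthetic series itself are all assumptions chosen for illustration.

```python
# Illustrative sketch only: static predictor vs. one retrained over a
# sliding "dynamic learning window" under adversarial perturbation.
# All parameters here (window size, lags, noise scale) are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic "cyber agility" time series standing in for a defense-tool metric.
T = 400
x = np.linspace(0, 8, T)
y = np.sin(x) + 0.1 * rng.standard_normal(T)      # clean agility signal

# Adversary perturbs the most recent observations (evasion-style noise).
attack_start = 300
y_adv = y.copy()
y_adv[attack_start:] += 0.8 * rng.standard_normal(T - attack_start)

def make_features(series, lags=5):
    """Lagged-value features for one-step-ahead prediction."""
    X = np.column_stack(
        [series[i:len(series) - lags + i] for i in range(lags)]
    )
    return X, series[lags:]

lags, window = 5, 100
errors_static, errors_dynamic = [], []

# Static model: trained once on pre-attack data, never updated.
X_tr, y_tr = make_features(y_adv[:attack_start], lags)
static_model = Ridge().fit(X_tr, y_tr)

for t in range(attack_start, T - 1):
    # Predict the next step from the last `lags` (possibly perturbed) values.
    x_t = y_adv[t - lags + 1:t + 1].reshape(1, -1)
    truth = y[t + 1]  # evaluate against the clean signal

    errors_static.append(abs(static_model.predict(x_t)[0] - truth))

    # Dynamic learning window: retrain on the most recent `window` points,
    # a stand-in for retraining on verified attack signatures.
    X_w, y_w = make_features(y_adv[t - window:t + 1], lags)
    dyn_model = Ridge().fit(X_w, y_w)
    errors_dynamic.append(abs(dyn_model.predict(x_t)[0] - truth))

print(f"static MAE : {np.mean(errors_static):.3f}")
print(f"dynamic MAE: {np.mean(errors_dynamic):.3f}")
```

The printed mean absolute errors show how a model fixed at training time can drift once an adversary influences recent metrics, while periodic retraining over a bounded window limits exposure to stale assumptions; the actual mechanism in the paper (verified attack signatures) is richer than this simple rolling refit.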