A framework for quantitative security analysis of machine learning

We propose a framework for quantitative security analysis of machine learning methods. Its key components are the formal specification of the deployed learning model and the attacker's constraints, the computation of an optimal attack, and the derivation of an upper bound on the adversarial impact. As an example, we apply the framework to the analysis of a specific learning scenario, online centroid anomaly detection, and experimentally verify the tightness of the obtained theoretical bounds.
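To make the analyzed scenario concrete, the following is a minimal sketch of online centroid anomaly detection, assuming a fixed-radius hypersphere decision rule and a replace-oldest window update; the class name, parameters, and the particular update rule are illustrative assumptions, not the paper's exact formulation (which also considers other point-replacement policies).

```python
import numpy as np
from collections import deque


class OnlineCentroidDetector:
    """Sketch: flag a point as anomalous if its distance to the
    current centroid exceeds radius r; accepted points replace the
    oldest point in a fixed-size window, shifting the centroid."""

    def __init__(self, initial_points, r):
        # Fixed-size working set of n training points.
        self.window = deque(np.asarray(p, dtype=float) for p in initial_points)
        self.n = len(self.window)
        self.r = float(r)
        self.centroid = np.mean(self.window, axis=0)

    def is_anomalous(self, x):
        return np.linalg.norm(np.asarray(x, dtype=float) - self.centroid) > self.r

    def update(self, x):
        """Accept x only if it lies inside the hypersphere (online
        retraining); return whether the point was accepted."""
        x = np.asarray(x, dtype=float)
        if self.is_anomalous(x):
            return False
        oldest = self.window.popleft()
        self.window.append(x)
        # Incremental centroid update: c' = c + (x - x_oldest) / n
        self.centroid += (x - oldest) / self.n
        return True
```

In this setting, a poisoning attacker submits points just inside the acceptance boundary to drag the centroid toward a target region; the framework's upper bound quantifies how much centroid displacement such an optimal attack can achieve under the attacker's constraints.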