Adversarial Regression with Multiple Learners

Despite the considerable success enjoyed by machine learning techniques in practice, numerous studies have demonstrated that many approaches are vulnerable to attacks. An important class of such attacks involves adversaries changing features at test time to cause incorrect predictions. Previous investigations of this problem pit a single learner against an adversary. However, in many situations an adversary's decision is aimed at a collection of learners, rather than targeted at each one independently. We study the problem of adversarial linear regression with multiple learners. We approximate the resulting game by deriving an upper bound on the learners' loss functions, and show that the approximate game has a unique symmetric equilibrium. We present an algorithm for computing this equilibrium, and show through extensive experiments that equilibrium models are significantly more robust than conventional regularized linear regression.
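The sketch below is a minimal, hypothetical illustration of the threat model the abstract describes, not the paper's equilibrium algorithm: several linear learners are trained independently, an adversary perturbs test-time features to push their collective prediction toward a desired target, and ridge regularization stands in for the "conventional regularized linear regression" baseline. The helper names (`fit_learners`, `attack`), the single FGSM-style perturbation step, and all numeric settings are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = X @ w_true + noise (all values illustrative).
n, d = 300, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def fit_ridge(X, y, lam):
    """Closed-form ridge regression; lam = 0 recovers ordinary least squares."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def fit_learners(X, y, lam, m):
    """m independent learners, each trained on its own bootstrap sample."""
    models = []
    for _ in range(m):
        idx = rng.integers(0, len(y), size=len(y))
        models.append(fit_ridge(X[idx], y[idx], lam))
    return np.stack(models)  # shape (m, d)

def attack(X, W, target, eps):
    """Adversary takes one FGSM-style step of size eps on the test features,
    pushing the learners' averaged prediction toward `target` (an assumption,
    not the attack model used in the paper)."""
    w_bar = W.mean(axis=0)
    grad = 2.0 * (X @ w_bar - target)[:, None] * w_bar[None, :]
    return X - eps * np.sign(grad)

X_test = rng.normal(size=(100, d))
y_test = X_test @ w_true

for lam in (0.0, 1.0, 10.0):
    W = fit_learners(X, y, lam, m=5)
    X_adv = attack(X_test, W, target=5.0, eps=0.2)
    pred_clean = X_test @ W.mean(axis=0)
    pred_adv = X_adv @ W.mean(axis=0)
    print(f"lambda={lam:5.1f}  clean MSE={np.mean((pred_clean - y_test)**2):.3f}"
          f"  adversarial MSE={np.mean((pred_adv - y_test)**2):.3f}")
```

Running this shows the tradeoff the paper targets: heavier regularization shrinks the learners' weights and so blunts the adversary's leverage over test-time features, but at the cost of clean-data accuracy; the paper's contribution is an equilibrium model that handles this tradeoff more effectively than tuning a ridge penalty alone.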
