A game-theoretic analysis of label flipping attacks on distributed support vector machines

Distributed machine learning algorithms play a significant role in processing massive data sets over large networks. However, the increasing reliance of machine learning on information and communication technologies makes it inherently vulnerable to cyber threats. This work aims to develop secure distributed algorithms that protect the learning process from adversaries. We establish a game-theoretic framework to capture the conflicting goals of a learner who uses distributed support vector machines (DSVM) and an attacker who is capable of flipping training labels. We develop a fully distributed and iterative algorithm that captures the real-time reactions of the learner at each node to adversarial behaviors. Numerical results show that DSVM is vulnerable to label-flipping attacks and that the impact of an attack depends strongly on the network topology.
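To make the learner-attacker interaction concrete, it can be sketched as a min-max problem over a consensus-constrained SVM objective. The formulation below is an illustrative sketch under assumed notation (per-node weights w_v and biases b_v on a network G = (V, E), flip indicators delta_vn, and per-node flip budgets k_v), not necessarily the paper's exact objective:

\begin{align*}
\min_{\{w_v, b_v\}} \; \max_{\{\delta_{vn}\}} \quad
  & \sum_{v \in V} \tfrac{1}{2} \|w_v\|^2
    + C \sum_{v \in V} \sum_{n=1}^{N_v}
      \big[\, 1 - (1 - 2\delta_{vn})\, y_{vn} \big( w_v^\top x_{vn} + b_v \big) \,\big]_+ \\
\text{s.t.} \quad
  & w_v = w_u, \; b_v = b_u \quad \forall (v, u) \in E, \\
  & \delta_{vn} \in \{0, 1\}, \quad \sum_{n=1}^{N_v} \delta_{vn} \le k_v \quad \forall v \in V,
\end{align*}

where (1 - 2\delta_{vn}) y_{vn} is the possibly flipped label and [\,\cdot\,]_+ denotes the hinge loss; the consensus constraints w_v = w_u couple neighboring nodes, which is why the network topology shapes how an attack propagates.

To illustrate the vulnerability itself, the following is a minimal, self-contained Python sketch of label flipping against a centralized linear SVM. It is an assumption-laden toy (random rather than strategic flips, a synthetic scikit-learn dataset, and arbitrary parameters), not the paper's distributed experimental setup:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for flip_rate in (0.0, 0.1, 0.2, 0.3):
    # Attacker flips a budgeted fraction of the binary training labels.
    y_attacked = y_tr.copy()
    n_flip = int(flip_rate * len(y_attacked))
    idx = rng.choice(len(y_attacked), size=n_flip, replace=False)
    y_attacked[idx] = 1 - y_attacked[idx]
    # Learner trains on the poisoned labels; evaluation uses clean test labels.
    clf = LinearSVC(C=1.0, dual=False).fit(X_tr, y_attacked)
    print(f"flip rate {flip_rate:.0%}: test accuracy {clf.score(X_te, y_te):.3f}")

Even these random flips visibly degrade test accuracy as the budget grows; a strategic attacker solving the max step above can inflict at least as much damage for the same budget.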
