Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks

Indiscriminate data poisoning attacks aim to decrease a model's test accuracy by injecting a small amount of corrupted training data. Despite significant interest, existing attacks remain relatively ineffective against modern machine learning (ML) architectures. In this work, we introduce the notion of model poisoning reachability as a technical tool to explore the intrinsic limits of data poisoning attacks that aim at inducing target parameters (i.e., model-targeted attacks). We derive an easily computable threshold to establish and quantify a surprising phase transition phenomenon among popular ML models: data poisoning attacks can achieve certain target parameters only when the poisoning ratio exceeds our threshold. Building on existing parameter corruption attacks and refining the Gradient Canceling attack, we perform extensive experiments to confirm our theoretical findings, test the predictability of our transition threshold, and significantly improve existing indiscriminate data poisoning baselines over a range of datasets and models. Our work highlights the critical role played by the poisoning ratio, and offers new insights into existing empirical results, attacks, and mitigation strategies in data poisoning.
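To make the Gradient Canceling idea mentioned above concrete, the sketch below crafts poison points for a toy binary logistic regression problem so that a given set of target parameters becomes (approximately) a stationary point of the loss on the combined clean-plus-poison data. This is a minimal illustration under assumed simplifications, not the paper's implementation: the function names, the fixed poison labels, and the toy data are all hypothetical.

```python
import torch

def logistic_grad(theta, X, y):
    """Sum of per-example logistic-loss gradients at parameters theta."""
    p = torch.sigmoid(X @ theta)      # predicted probabilities, shape (n,)
    return X.t() @ (p - y)            # gradient w.r.t. theta, shape (d,)

def gradient_canceling(theta_t, X_clean, y_clean, n_poison, steps=2000, lr=0.05):
    """Optimize poison features so the total training gradient at the
    target parameters theta_t is driven toward zero (gradient canceling)."""
    d = X_clean.shape[1]
    g_clean = logistic_grad(theta_t, X_clean, y_clean).detach()
    # Poison features are free variables; labels fixed to 1 for simplicity.
    X_p = torch.randn(n_poison, d, requires_grad=True)
    y_p = torch.ones(n_poison)
    opt = torch.optim.Adam([X_p], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        g_total = g_clean + logistic_grad(theta_t, X_p, y_p)
        loss = g_total.pow(2).sum()   # squared norm of the total gradient
        loss.backward()
        opt.step()
    return X_p.detach(), y_p

# Toy usage with a hypothetical target parameter vector:
torch.manual_seed(0)
X = torch.randn(200, 5)
y = (torch.rand(200) > 0.5).float()
theta_target = torch.randn(5)
X_poison, y_poison = gradient_canceling(theta_target, X, y, n_poison=20)
```

Minimizing the squared norm of the total gradient at the target parameters is what makes such an attack model-targeted: if the residual reaches zero, retraining on the poisoned dataset can settle at the target, and the achievable targets depend on how many poison points (the poisoning ratio) the attacker controls.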
