Invisible Poisoning: Highly Stealthy Targeted Poisoning Attack

Deep learning is widely applied across many areas because of its strong performance. However, it is vulnerable to adversarial attacks and poisoning attacks, which has raised serious concerns. A number of attack methods and defense strategies have been proposed, most of which focus on adversarial attacks mounted at test time. Poisoning attacks, which corrupt a model's training data, are more difficult to defend against, since deep learning models rely heavily on their training data and training strategies to guarantee performance. Existing poisoning attacks generally rely either on benign examples with poisoned labels or on poison-training examples with benign labels; both cases are easy to detect. In this paper, we propose a novel poisoning attack named Invisible Poisoning Attack (IPA). IPA trains the deep learning model with highly stealthy poison-training examples that carry benign labels and are perceptually similar to their benign counterparts. At test time, the poisoned model handles benign examples correctly but outputs erroneous results when fed the targeted benign examples (poisoning-trigger examples). We adopt the Non-dominated Sorting Genetic Algorithm (NSGA-II) as the optimizer for evolving the highly stealthy poison-training examples; the resulting near-optimal examples are both invisible and effective at attacking the target model. We verify the effectiveness of IPA against face recognition systems on several face datasets, evaluating attack ability, stealthiness, and transferability.
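To make the two-objective search concrete, the sketch below evolves a pixel-level perturbation under NSGA-II-style dominance-based selection, trading an invisibility objective (distance to the benign counterpart) against a surrogate attack objective. It is a minimal illustration only, not the authors' implementation: the surrogate attack_loss, the perturbation encoding, the budget EPS, and the simplified domination-count selection (in place of full fast non-dominated sorting with crowding distance) are assumptions.

```python
# Minimal sketch of the two-objective poison search described above.
# Assumptions (not from the paper): attack_loss is a hypothetical stand-in for
# retraining the target model on the candidate poison and scoring its error on
# the poisoning-trigger example; image size, budget, and operators are illustrative.
import numpy as np

rng = np.random.default_rng(0)

DIM = 32 * 32             # flattened image size (assumed)
EPS = 0.1                 # per-pixel invisibility budget (assumed)
benign = rng.random(DIM)  # the benign training image being perturbed

def clip_to_budget(delta):
    # Keep the perturbation within the budget and the poison image
    # (benign + delta) within the valid pixel range [0, 1].
    delta = np.clip(delta, -EPS, EPS)
    return np.clip(benign + delta, 0.0, 1.0) - benign

def invisibility(delta):
    # Objective 1 (minimize): perceptual distance to the benign counterpart.
    return float(np.linalg.norm(delta))

def attack_loss(delta):
    # Objective 2 (minimize): hypothetical surrogate for attack effectiveness.
    # The real attack would retrain the target model on the poisoned data and
    # measure misclassification of the trigger example.
    target_direction = np.sin(np.arange(DIM))  # stand-in signal
    return float(1.0 / (1.0 + abs(np.dot(delta, target_direction))))

def evaluate(pop):
    return np.array([[invisibility(d), attack_loss(d)] for d in pop])

def domination_count(F):
    # Number of individuals that dominate each candidate (0 = non-dominated).
    # A simplification of NSGA-II's fast non-dominated sorting used here
    # only to rank candidates for selection.
    n = len(F)
    counts = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                counts[i] += 1
    return counts

POP, GENS = 40, 30
pop = np.array([clip_to_budget(d) for d in rng.uniform(-EPS, EPS, (POP, DIM))])
for _ in range(GENS):
    # Variation: Gaussian mutation, clipped back to the budget (assumed operator).
    children = np.array([clip_to_budget(d) for d in pop + rng.normal(0, 0.01, pop.shape)])
    union = np.vstack([pop, children])
    F = evaluate(union)
    # Environmental selection: least-dominated candidates first, ties broken
    # by smaller perceptual distance (crude stand-in for crowding distance).
    order = np.lexsort((F[:, 0], domination_count(F)))
    pop = union[order[:POP]]

best = pop[0]
print("example poison perturbation: L2 =", round(invisibility(best), 3),
      "surrogate attack loss =", round(attack_loss(best), 3))
```

In the full attack each fitness evaluation would involve the target model itself, so the surrogate objective here only stands in for that expensive inner step; the dominance-based selection is what lets the search balance stealthiness against attack strength without fixing a single weighting between the two.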
