Backdoor attacks are poisoning attacks that pose a serious threat to deep neural networks. When an adversary mixes poison data into a training dataset, the resulting dataset is called a poison training dataset. A model trained on the poison training dataset becomes a backdoor model, which achieves high stealthiness and attack feasibility: it classifies only poison images into the adversarial target class and classifies all other images into their correct classes. We propose an additional procedure for our previously proposed countermeasure against backdoor attacks, which uses knowledge distillation. Our procedure removes poison data from a poison training dataset and recovers the accuracy of the distillation model. Our countermeasure differs from previous ones in that it does not require detecting or identifying backdoor models, backdoor neurons, or poison data. A characteristic assumption in our defense scenario is that the defender can collect clean images without labels. The defender distills clean knowledge from a backdoor model (teacher model) into a distillation model (student model) with knowledge distillation. Subsequently, the defender removes poison-data candidates from the poison training dataset by comparing the predictions of the backdoor and distillation models. Finally, the defender fine-tunes the distillation model with the detoxified training dataset to improve classification accuracy. We evaluated our countermeasure on two datasets. The backdoor is disabled by distillation, and fine-tuning further improves the classification accuracy of the distillation model. The fine-tuned model achieved accuracy comparable to a baseline model when the number of clean images in the distillation dataset was more than 13% of the training data. Our results indicate that our countermeasure can be applied to general image-classification tasks and that it works well whether or not the training dataset the defender receives is poisoned.
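The three-step pipeline described above can be summarized in code. The following is a minimal sketch, assuming PyTorch; the function names (`distill`, `detoxify`, `fine_tune`), the distillation temperature, the prediction-agreement rule, and all training hyperparameters are illustrative assumptions rather than the paper's exact settings.

```python
# Hypothetical sketch of the countermeasure: distillation on unlabeled clean images,
# poison-candidate removal by teacher/student disagreement, then fine-tuning.
import torch
import torch.nn.functional as F


def distill(teacher, student, clean_unlabeled_loader, epochs=10, T=4.0, lr=1e-3):
    """Step 1: distill 'clean' knowledge from the (possibly backdoored) teacher
    into a student model using unlabeled clean images only."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    teacher.eval()
    student.train()
    for _ in range(epochs):
        for x in clean_unlabeled_loader:  # images only, no labels
            with torch.no_grad():
                t_logits = teacher(x)
            s_logits = student(x)
            # Soft-target KL loss with temperature T (standard Hinton-style distillation).
            loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                            F.softmax(t_logits / T, dim=1),
                            reduction="batchmean") * (T * T)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student


@torch.no_grad()
def detoxify(teacher, student, train_images, train_labels):
    """Step 2: keep only samples on which the backdoor (teacher) and distillation
    (student) models agree; disagreement flags a poison-data candidate.
    Done in one pass here for brevity; in practice iterate over a DataLoader."""
    teacher.eval()
    student.eval()
    t_pred = teacher(train_images).argmax(dim=1)
    s_pred = student(train_images).argmax(dim=1)
    keep = t_pred == s_pred
    return train_images[keep], train_labels[keep]


def fine_tune(student, detox_loader, epochs=5, lr=1e-4):
    """Step 3: fine-tune the distillation model on the detoxified training set."""
    opt = torch.optim.SGD(student.parameters(), lr=lr, momentum=0.9)
    student.train()
    for _ in range(epochs):
        for x, y in detox_loader:
            loss = F.cross_entropy(student(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

The intuition behind the filtering step is that the student is distilled only on clean, trigger-free images, so it should not inherit the backdoor behavior; training samples on which the teacher and the student disagree are therefore treated as poison-data candidates and removed before fine-tuning.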