A Method for Resisting Adversarial Attack on Time Series Classification Model in IoT System

IoT devices are typically associated with corresponding datasets, algorithms, and infrastructure. However, when deep learning algorithms are deployed on these devices, many potential threats arise in the underlying IoT infrastructure. Deep learning methods are widely used as the core decision algorithms for classifying time series data, an important task in IoT data applications. Nevertheless, these models are vulnerable to adversarial examples, which pose risks in fields such as medicine and security, where a minor perturbation of the time series data can lead to a wrong decision. In this paper, we demonstrate white-box attacks and random noise attacks against time series data. Moreover, we present an adversarial example generation method that changes only one value of the original time series. To resist adversarial attacks, we train an adversarial example detector that distinguishes adversarial examples from normal examples based on deep features, so that adversarial inputs can be filtered out before they cause further harm. Experiments on the UCR datasets show that 97% of the adversarial examples generated by two common attack methods, FGSM and BIM, can be successfully detected.
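
For concreteness, the sketch below illustrates the two white-box attacks named in the abstract, FGSM and BIM, applied to a time series classifier. This is a minimal sketch, not the paper's implementation: it assumes a differentiable PyTorch model `model` that maps inputs of shape (batch, channels, length) to class logits, and the function names and hyperparameter values (`eps`, `alpha`, `steps`) are illustrative.

```python
# Minimal sketch of FGSM and BIM perturbations of a time series,
# assuming a differentiable PyTorch classifier `model`.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.1):
    """One-step Fast Gradient Sign Method: x_adv = x + eps * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def bim_attack(model, x, y, eps=0.1, alpha=0.01, steps=10):
    """Basic Iterative Method: repeated small FGSM steps, projected back
    onto the eps-ball around the original series x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # clip to eps-ball
    return x_adv.detach()
```

The detector described in the abstract would then be trained as a binary classifier on deep features extracted from clean and attacked series; that step is not sketched here because the abstract does not specify the detector's architecture.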
