Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems

Abstract

Given their substantial success in addressing a wide range of computer vision challenges, Convolutional Neural Networks (CNNs) are increasingly being used in smart home applications, many of which rely on the automatic recognition of human activities. In this context, low-power radar devices have recently gained popularity as recording sensors, since they mitigate a number of the privacy concerns that arise when conventional video cameras are used. Another concern often cited when designing smart home applications is their resilience against cyberattacks. It is, for instance, well known that the combination of images and CNNs is vulnerable to adversarial examples: maliciously crafted data points that force machine learning models to produce wrong classifications at test time. In this paper, we investigate the vulnerability to adversarial attacks of radar-based CNNs that have been designed to recognize human gestures. Through experiments with four unique threat models, we show that radar-based CNNs are susceptible to both white- and black-box adversarial attacks. We also expose an extreme attack case in which the prediction of a radar-based CNN can be changed by perturbing only the padding of the input, without touching the frames in which the action itself occurs. Moreover, we observe that gradient-based attacks do not spread their perturbation randomly, but concentrate it on important features of the input data. We highlight these important features using Grad-CAM, a popular neural network interpretability method, thereby showing the connection between adversarial perturbation and prediction interpretability.
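
To make the attack setting concrete, the following is a minimal sketch of a single-step gradient-sign attack in the spirit of FGSM, one representative of the white-box attack family discussed above. It is not the paper's exact implementation: the PyTorch classifier `model`, the budget `epsilon`, and the optional `pad_mask` (1 on padding frames, 0 on action frames, emulating the padding-only attack) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03, pad_mask=None):
    """Single-step gradient-sign attack (FGSM-style sketch).

    model    -- a differentiable classifier returning logits
    x, y     -- input batch and ground-truth labels
    epsilon  -- L-infinity perturbation budget (illustrative value)
    pad_mask -- optional 0/1 tensor confining the perturbation to the
                padding frames only (the extreme case described above)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each input element in the direction that increases the loss.
    step = epsilon * x.grad.sign()
    if pad_mask is not None:
        step = step * pad_mask  # leave the action frames untouched
    return (x + step).detach()
```

For a radar pipeline, `x` could for instance be a batch of micro-Doppler frames; `pad_mask` then simply broadcasts zeros over the frames in which the activity occurs, so the perturbation lives exclusively in the padding.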
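
Likewise, a minimal Grad-CAM sketch illustrates how the salient input regions referred to above can be computed. Here `target_layer` (assumed to be the last convolutional layer of a 2D CNN operating on spectrogram-like inputs) and the single-example input `x` are again assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, x, class_idx):
    """Grad-CAM map for a single input x of shape (1, C, H, W)."""
    acts, grads = [], []
    h_fwd = target_layer.register_forward_hook(
        lambda mod, inp, out: acts.append(out))
    h_bwd = target_layer.register_full_backward_hook(
        lambda mod, g_in, g_out: grads.append(g_out[0]))
    try:
        score = model(x)[0, class_idx]  # logit of the class of interest
        model.zero_grad()
        score.backward()
    finally:
        h_fwd.remove()
        h_bwd.remove()
    a, g = acts[0], grads[0]                    # both (1, K, h, w)
    weights = g.mean(dim=(2, 3), keepdim=True)  # pool gradients per channel
    cam = F.relu((weights * a).sum(dim=1))      # weighted sum, then ReLU
    return cam / (cam.max() + 1e-8)             # normalize to [0, 1]
```

Overlaying the resulting map on the input makes it possible to check whether the adversarial perturbation concentrates on the same regions that Grad-CAM marks as decisive for the prediction.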
