Adversarial Attack: A New Threat to Smart Devices and How to Defend Against It

This article introduces the adversarial attack, a recently unveiled security threat to consumer electronics, particularly devices that rely on machine learning. We begin with the fundamentals: what adversarial examples are, how such attacks are carried out, and what the common defense methods look like. Adversarial training improves a model's resilience to adversarial attacks by training on both regular and adversarial examples. However, adversarial examples generated at a single adversarial strength provide defense over only a limited range of attacks. We therefore propose a multi-strength adversarial training method, in which a random walk algorithm is adopted to optimize the selection of adversarial strengths, a choice closely tied to design cost and training time. We also analyze the hardware cost and quantization loss to guide future consumer electronics designs.
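To make the ideas in the abstract concrete, the sketch below illustrates the two ingredients it describes: a fast-gradient-sign (FGSM-style) perturbation, whose magnitude `eps` is the "adversarial strength," and a toy random walk over a discrete set of strength levels. The linear model, the gradient helper, and the `random_walk_strengths` function are illustrative assumptions, not the authors' actual implementation.

```python
import math
import random

def fgsm_perturb(x, grad, eps):
    """FGSM-style adversarial example: x' = x + eps * sign(grad_x loss).

    `eps` is the adversarial strength; larger eps means a stronger attack.
    """
    sign = lambda g: 1 if g > 0 else (-1 if g < 0 else 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

def loss_grad_wrt_input(w, b, x, y):
    """Gradient of the logistic loss w.r.t. the INPUT x for a linear model.

    Toy stand-in for backpropagation through a real network.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))          # sigmoid prediction
    return [(p - y) * wi for wi in w]        # d(loss)/d(x_i)

def random_walk_strengths(num_levels, steps, seed=0):
    """Hypothetical random walk over discrete strength levels 0..num_levels-1.

    Each training round moves one level up or down (clamped), so multiple
    strengths are visited without training at every level exhaustively.
    """
    rng = random.Random(seed)
    level = rng.randrange(num_levels)
    visited = []
    for _ in range(steps):
        visited.append(level)
        level = min(num_levels - 1, max(0, level + rng.choice((-1, 1))))
    return visited
```

In a multi-strength training loop, each visited level would map to an epsilon (e.g., `eps = level * 0.05`), and the adversarial examples generated at that epsilon would be mixed with clean examples in the training batch.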
