An Overview of Laser Injection against Embedded Neural Network Models

For many IoT domains, Machine Learning, and more particularly Deep Learning, provides very efficient solutions for handling complex data and performing challenging, often critical, tasks. However, the deployment of models across a large variety of devices faces several obstacles related to trust and security. The latter is particularly critical given the demonstrations of severe flaws impacting the integrity, confidentiality and availability of neural network models. Moreover, the attack surface of such embedded systems cannot be reduced to abstract flaws but must encompass the physical threats related to the implementation of these models on hardware platforms (e.g., 32-bit microcontrollers). Among physical attacks, Fault Injection Analysis (FIA) is known to be very powerful, with a large spectrum of attack vectors. Most importantly, highly focused FIA techniques such as laser beam injection enable very accurate evaluation of both the vulnerabilities and the robustness of embedded systems. Here, we discuss how laser injection with state-of-the-art equipment, combined with theoretical evidence from Adversarial Machine Learning, highlights worrying threats against the integrity of deep learning inference, and we argue that joint efforts from the theoretical AI and Physical Security communities are urgently needed.
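
To make the threat model concrete, the following minimal sketch (not taken from the paper; every name, value and scale below is an illustrative assumption) simulates the single-bit fault model commonly used for faults in embedded memories, applied to an int8-quantized weight, and shows how corrupting one stored bit shifts a neuron's output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical int8-quantized weights and activations of a single neuron.
weights = rng.integers(-128, 128, size=8, dtype=np.int8)
activations = rng.integers(0, 128, size=8, dtype=np.int8)
scale = 0.05  # arbitrary dequantization scale, for illustration only

def neuron_output(w):
    # Dequantized dot product (no bias), as computed during inference.
    return float(np.dot(w.astype(np.int32), activations.astype(np.int32))) * scale

def flip_bit(w, index, bit):
    # Single-bit fault model: flip one bit of one stored weight byte.
    faulted = w.copy()
    view = faulted.view(np.uint8)       # reinterpret the int8 bytes as uint8
    view[index] ^= np.uint8(1 << bit)   # corrupt a single memory bit
    return faulted

nominal = neuron_output(weights)
faulted = neuron_output(flip_bit(weights, index=0, bit=7))  # hit the most significant bit
print(f"nominal: {nominal:+.3f}  after one-bit fault: {faulted:+.3f}")
```

In an actual laser injection campaign, the corrupted bit is not chosen in software as above: it is selected by spatially and temporally targeting the memory cells holding the parameters, or the instructions fetched during inference.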
