Evaluation of Parameter-based Attacks against Embedded Neural Networks with Laser Injection

Upcoming certification actions related to the security of machine learning (ML) based systems raise major evaluation challenges that are amplified by the large-scale deployment of models on many hardware platforms. Until recently, most research efforts focused on API-based attacks that consider an ML model as a pure algorithmic abstraction. However, new implementation-based threats have been revealed, emphasizing the urgency of proposing both practical and simulation-based methods to properly evaluate the robustness of models. A major concern is parameter-based attacks (such as the Bit-Flip Attack, BFA) that highlight the lack of robustness of typical deep neural network models when confronted with accurate and optimal alterations of their internal parameters stored in memory. In a security-testing setting, this work practically reports, for the first time, a successful variant of the BFA on a 32-bit Cortex-M microcontroller using laser fault injection, a standard fault injection means for security evaluation that enables spatially and temporally accurate faults. To avoid unrealistic brute-force strategies, we show how simulations help select the most sensitive set of bits among the parameters while taking the laser fault model into account.
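
To make the bit-selection step concrete, below is a minimal, self-contained sketch of the kind of fault simulation the abstract describes. It is an illustration under stated assumptions, not the paper's implementation: it exhaustively simulates single-bit "bit-set" faults (a single-bit fault model that has been reported for laser injection on Flash memory) on the int8 weights of a toy linear classifier, and ranks the (weight, bit) pairs by the accuracy drop they induce. The toy data, the model, the bit-set fault model, and the exhaustive search are all assumptions made for the example; the attack itself relies on a BFA-style guided search rather than brute force.

```python
# Illustrative sketch only (assumptions: toy data, toy linear model,
# bit-set fault model, exhaustive search). Ranks single-bit faults on
# int8 weights by the accuracy drop they cause.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset and a "trained" linear model quantized to int8 weights.
X = rng.normal(size=(256, 16)).astype(np.float32)
true_w = rng.normal(size=(16, 4)).astype(np.float32)
y = (X @ true_w).argmax(axis=1)
w_int8 = np.clip(np.round(true_w * 16), -128, 127).astype(np.int8)
scale = 1.0 / 16.0

def accuracy(w_q: np.ndarray) -> float:
    logits = X @ (w_q.astype(np.float32) * scale)
    return float((logits.argmax(axis=1) == y).mean())

baseline = accuracy(w_int8)

# Simulate a bit-set fault (0 -> 1 only) on every (weight, bit) pair.
flat = w_int8.flatten()
bits = flat.view(np.uint8)  # reinterpret int8 bytes for bitwise ops
results = []
for idx in range(flat.size):
    original = bits[idx]
    for bit in range(8):
        faulty = original | np.uint8(1 << bit)
        if faulty == original:  # bit already set: the fault has no effect
            continue
        bits[idx] = faulty
        results.append((baseline - accuracy(flat.reshape(w_int8.shape)),
                        idx, bit))
        bits[idx] = original    # restore before testing the next bit

# Most damaging single-bit faults first: candidate laser targets.
results.sort(reverse=True)
for drop, idx, bit in results[:5]:
    print(f"weight #{idx}, bit {bit}: accuracy drop {drop:.3f}")
```

In practice such a simulation tends to concentrate the damage on the most significant bit (bit 7, the sign bit in two's complement int8), which is consistent with the BFA literature and motivates restricting the search space before moving to physical injection.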
