A Practical Introduction to Side-Channel Extraction of Deep Neural Network Parameters

Model extraction is a major threat to embedded deep neural network models because physical access to a device extends the attack surface. By exploiting side-channel leakages, an adversary may extract critical information about a model, i.e., its architecture or internal parameters. Several adversarial objectives are possible, including a fidelity-based scenario in which the architecture and parameters are extracted precisely (model cloning). This work focuses on software implementations of deep neural networks embedded in a high-end 32-bit microcontroller (Cortex-M7) and exposes several challenges related to fidelity-based parameter extraction through side-channel analysis, from the basic multiplication operation to the feed-forward propagation through the layers. To precisely extract the values of parameters represented in the IEEE-754 single-precision floating-point standard, we propose an iterative process that is evaluated with both simulations and traces from a Cortex-M7 target. To our knowledge, this work is the first to target such a high-end 32-bit platform. Importantly, we raise and discuss the remaining challenges for the complete extraction of a deep neural network model, in particular the critical case of biases.
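To illustrate why the IEEE-754 encoding matters for such an attack, the sketch below shows how a single-precision parameter decomposes into sign, exponent, and mantissa fields, and a commonly assumed Hamming-weight leakage model for the product computed during inference. This is a minimal illustration of the general setting, not the paper's actual extraction procedure; the function names and the leakage hypothesis are assumptions for exposition.

```python
import struct

def f32_bits(x: float) -> int:
    """Return the IEEE-754 single-precision bit pattern of x as a 32-bit int."""
    return struct.unpack(">I", struct.pack(">f", x))[0]

def f32_fields(x: float):
    """Split a float32 into its (sign, exponent, mantissa) fields."""
    b = f32_bits(x)
    return (b >> 31) & 0x1, (b >> 23) & 0xFF, b & 0x7FFFFF

def hamming_weight(v: int) -> int:
    """Number of set bits in v."""
    return bin(v).count("1")

# Hypothetical leakage model (assumption, not the paper's method):
# the power/EM emission when the device handles the product w * x is
# taken proportional to the Hamming weight of its IEEE-754 encoding.
# A correlation attack would compare this prediction, over many known
# inputs x, against measured traces for each candidate weight w.
def predicted_leakage(w: float, x: float) -> int:
    return hamming_weight(f32_bits(w * x))
```

Under this kind of model, candidate weight values that maximize the correlation between predicted and measured leakage across many inputs are retained, which motivates an iterative, field-by-field refinement of the sign, exponent, and mantissa.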
