DeepLaser: Practical Fault Attack on Deep Neural Networks.

As deep learning systems are widely adopted in safety- and security-critical applications such as autonomous vehicles and banking, malicious faults and attacks become a serious concern and could lead to catastrophic consequences. In this paper, we present the first study of physical fault injection attacks on Deep Neural Networks (DNNs), using a laser injection technique on embedded systems. Our exploratory study targets four activation functions widely used in DNN development -- ReLU, softmax, sigmoid, and tanh -- the main building blocks that give DNNs their non-linear behavior. Our results show that by targeting these functions, an attacker can cause a misclassification by injecting faults into a hidden layer of the network. This result has practical implications for real-world applications, where faults can be introduced by simpler means, such as altering the supply voltage.
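To illustrate the fault model described above, the following is a minimal NumPy sketch (not the paper's actual setup; the toy network, weights, and input are invented for illustration). It simulates an instruction-skip fault that bypasses the ReLU in the hidden layer, so a negative pre-activation passes through unchanged and flips the classification:

```python
import numpy as np

# Toy network: 2 inputs -> 2 hidden units (ReLU) -> 2 output classes (softmax).
# Weights are illustrative values chosen so the fault flips the prediction.
W1 = np.array([[1.0, -1.0],
               [1.0, -2.0]])
W2 = np.array([[1.0,  0.0],
               [0.0, -2.0]])

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def forward(x, fault=False):
    h = x @ W1
    if not fault:
        h = np.maximum(h, 0.0)  # normal ReLU
    # fault=True models a laser-induced instruction skip: the ReLU
    # computation never executes, so negative values leak through.
    return softmax(h @ W2)

x = np.array([1.0, 1.0])          # pre-activations: [2.0, -3.0]
benign = forward(x)               # ReLU clamps -3.0 to 0 -> class 0
faulted = forward(x, fault=True)  # -3.0 leaks through     -> class 1
print(np.argmax(benign), np.argmax(faulted))
```

A single skipped activation is enough here because the leaked negative value is amplified by the next layer's weights; the paper's point is that such faults in real hardware can be induced physically rather than in software.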
