When Single Event Upset Meets Deep Neural Networks: Observations, Explorations, and Remedies

Deep Neural Networks (DNNs) have proven their potential in a variety of perception tasks and have therefore become an appealing option for interpretation and data processing in security-sensitive systems. However, such systems demand not only high perception performance but also design robustness under a variety of circumstances. Unlike prior works that study network robustness at the software level, we investigate, from a hardware perspective, the impact of Single Event Upset (SEU) induced parameter perturbation (SIPP) on neural networks. We systematically define the fault models of SEU and then define sensitivity to SIPP as the robustness measure for a network. We are then able to analytically explore the weaknesses of a network and summarize key findings on the impact of SIPP on different types of bits in a floating-point parameter, on layer-wise robustness within the same network, and on network depth. Based on these findings, we propose two remedy solutions that protect DNNs from SIPP, reducing accuracy degradation from 28% to 0.27% for ResNet with only 0.24 bits of SRAM area overhead per parameter.
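
As an illustration of the floating-point fault model, the sketch below (a minimal example of our own, assuming NumPy and the IEEE-754 single-precision layout; it is not code from the paper) injects a single bit flip into a float32 parameter and shows why an upset in an exponent bit is far more damaging than one in a low-order mantissa bit.

import numpy as np

def flip_bit(value, bit_index):
    # Flip one bit of an IEEE-754 float32 value.
    # Bit 31 is the sign, bits 30-23 the exponent, bits 22-0 the mantissa.
    bits = np.array([value], dtype=np.float32).view(np.uint32)
    bits ^= np.uint32(1 << bit_index)
    return bits.view(np.float32)[0]

w = 0.05                 # a typical small network weight
print(flip_bit(w, 30))   # exponent MSB flip: magnitude jumps by dozens of orders of magnitude
print(flip_bit(w, 5))    # low-order mantissa bit flip: the value barely changes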
