Improving fault tolerance of DNNs through weight remapping based on Gaussian distribution: work-in-progress

In this paper, we aim to improve the fault tolerance of Deep Neural Networks (DNNs) for safety-critical artificial intelligence applications. We propose to remap the range of 32-bit floating-point values to weights so as to reduce the influence of invalid weights caused by bit-flip faults. From preliminary experiments, we observe that weakening the bit-flip faults that make positive weights larger helps improve the reliability of DNNs. We then propose a Gaussian-distribution-based mapping method that protects weights from being corrupted by bit-flip faults, in which a novel function remaps the relation between the 32-bit float representation and the values of the weights. Extensive experiments demonstrate that our approach improves the accuracy of a faulty VGG16 from 13.5% to 80.5%, outperforming six other DNN fault-tolerance approaches.
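The abstract does not spell out the remapping function itself, so the following is only an illustrative sketch of the idea. It first shows the failure mode the paper targets: flipping a high exponent bit of an IEEE-754 binary32 weight makes a small positive weight enormous. It then shows one plausible Gaussian-distribution-based remap, in which a weight is stored as its Gaussian CDF value in (0, 1) and decoded through a clamped inverse CDF, so a bit flip on the stored code can no longer produce an unbounded weight. The names `encode`/`decode`, the spread `SIGMA`, and the clamp margin `EPS` are assumptions for this sketch, not the paper's actual formulation.

```python
import struct
from statistics import NormalDist

_STD = NormalDist()   # standard Gaussian, provides cdf / inv_cdf
SIGMA = 0.1           # assumed spread of the weight distribution
EPS = 1e-6            # clamp margin so inv_cdf stays finite

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit of the IEEE-754 binary32 encoding of x."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    (y,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return y

def encode(w: float) -> float:
    """Map a weight to its Gaussian CDF value in (0, 1) before storage."""
    return _STD.cdf(w / SIGMA)

def decode(u: float) -> float:
    """Map a stored code back to a weight; clamping bounds the damage of a flip."""
    u = min(max(u, EPS), 1.0 - EPS)
    return SIGMA * _STD.inv_cdf(u)

w = 0.05
# Direct storage: flipping the top exponent bit (bit 30) explodes the weight.
direct_fault = flip_bit(w, 30)
# Remapped storage: the same flip decodes to a bounded weight.
remapped_fault = decode(flip_bit(encode(w), 30))
```

Under this sketch, `direct_fault` is astronomically large, while `remapped_fault` stays within a few multiples of `SIGMA`, which is the qualitative effect the paper's remapping is designed to achieve for faults that make positive weights larger.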
