FANNet: Formal Analysis of Noise Tolerance, Training Bias and Input Sensitivity in Neural Networks

With constant improvements in network architectures and training methodologies, Neural Networks (NNs) are increasingly being deployed in real-world Machine Learning systems. However, despite their impressive performance on "known" inputs, these NNs can fail unpredictably on "unseen" inputs, especially when real-time inputs deviate from the training dataset distribution or contain certain types of input noise. This indicates the low noise tolerance of NNs, which is also a major enabler of adversarial attacks. This is a serious concern, particularly for safety-critical applications, where inaccurate results can lead to dire consequences. We propose a novel methodology that leverages model checking for the Formal Analysis of Neural Networks (FANNet) under different input noise ranges. Our methodology allows us to rigorously analyze the noise tolerance of NNs, the sensitivity of their input nodes, and the effects of training bias on their performance, e.g., in terms of classification accuracy. For evaluation, we use a feed-forward fully-connected NN architecture trained for Leukemia classification. Our experimental results show a noise tolerance of ±11% for the given trained network, identify its most sensitive input nodes, and confirm the bias in the available training dataset.
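To make the core idea concrete, the sketch below encodes the same style of question as a decision problem for an off-the-shelf solver: given a trained network and a noise bound eps, can any perturbation within ±eps flip the classification? This is a minimal, hypothetical illustration only; it uses the Z3 SMT solver rather than the paper's model-checking toolchain, and the network weights, nominal input, and noise magnitudes are invented for the example.

    # Hedged illustration (NOT the FANNet toolchain): check whether bounded
    # input noise can flip the class of a tiny 2-2-2 ReLU network using the
    # Z3 SMT solver. All weights and the nominal input are made up.
    from z3 import Real, Solver, If, And, sat

    W1 = [[0.8, -0.4], [0.3, 0.9]]   # hypothetical input->hidden weights
    W2 = [[1.0, -0.6], [-0.5, 1.2]]  # hypothetical hidden->output weights
    x0 = [0.9, 0.1]                  # nominal input; classified as class 0

    def relu(e):
        # ReLU expressed as an SMT if-then-else term
        return If(e > 0, e, 0)

    def is_tolerant(eps):
        """True iff no noise within +/-eps can change the predicted class."""
        s = Solver()
        n = [Real(f"n{i}") for i in range(2)]            # per-input noise
        for ni in n:
            s.add(And(ni >= -eps, ni <= eps))
        x = [x0[i] + n[i] for i in range(2)]             # perturbed input
        h = [relu(sum(W1[j][i] * x[i] for i in range(2))) for j in range(2)]
        y = [sum(W2[j][i] * h[i] for i in range(2)) for j in range(2)]
        s.add(y[1] >= y[0])          # assert a misclassification exists
        return s.check() != sat      # unsat => no noise flips the class

    for eps in (0.05, 0.10, 0.13, 0.15):
        status = "tolerant" if is_tolerant(eps) else "counterexample found"
        print(f"eps = {eps:.2f}: {status}")

Sweeping eps upward until the solver first finds a counterexample yields a noise-tolerance bound in the spirit of the paper's ±11% result; a real analysis would operate on the actual trained weights and the full range of valid inputs rather than a single nominal point.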
