Robustness of Neural Networks to Parameter Quantization

Quantization, a technique commonly used to reduce the memory footprint of a neural network for edge computing, entails lowering the precision of the floating-point representation used for the network's parameters. The impact of the resulting round-off errors on the overall performance of the network is typically estimated through testing, which is not exhaustive and therefore cannot guarantee the safety of the model. We present a framework based on Satisfiability Modulo Theories (SMT) solvers to quantify the robustness of neural networks to parameter perturbation. To this end, we introduce notions of local and global robustness that capture the deviation in the confidence of class assignments caused by parameter quantization. These robustness notions are then cast as SMT problems and solved automatically using off-the-shelf solvers such as dReal. We demonstrate our framework on two simple Multi-Layer Perceptrons (MLPs) that perform binary classification on two-dimensional inputs. In addition to quantifying their robustness, we show that Rectified Linear Unit (ReLU) activations yield higher robustness than linear activations for our MLPs.
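
To make the encoding concrete, the sketch below poses a local robustness query through the dreal Python bindings (dReal's delta-complete decision procedure for nonlinear arithmetic over the reals). It is an illustration, not the paper's implementation: the 2-2-1 MLP, its nominal weights, the perturbation bound eps, and the deviation threshold tau are assumed values, the deviation is measured on the output logit rather than the post-sigmoid confidence so the formula stays polynomial, and each ReLU is encoded with the complementarity constraints h >= 0, h >= z, h*(h - z) = 0, which require only conjunctions and arithmetic.

```python
from dreal import Variable, And, CheckSatisfiability

# Nominal (trained) parameters of a toy 2-2-1 MLP -- assumed values for illustration.
W1 = [[0.8, -0.4], [0.3, 0.9]]   # hidden-layer weights
b1 = [0.1, -0.2]                 # hidden-layer biases
W2 = [1.2, -0.7]                 # output-layer weights
b2 = 0.05                        # output-layer bias

x0 = [0.5, -1.0]   # fixed input at which local robustness is checked
eps = 0.05         # allowed per-parameter perturbation (quantization error bound)
tau = 0.1          # tolerated deviation of the output logit

def forward(W1, b1, W2, b2, x):
    """Nominal forward pass with ReLU hidden units (plain Python floats)."""
    h = [max(0.0, W1[i][0] * x[0] + W1[i][1] * x[1] + b1[i]) for i in range(2)]
    return W2[0] * h[0] + W2[1] * h[1] + b2

logit_nom = forward(W1, b1, W2, b2, x0)

# Perturbed parameters become SMT variables, each within eps of its nominal value.
pW1 = [[Variable(f"w1_{i}{j}") for j in range(2)] for i in range(2)]
pb1 = [Variable(f"b1_{i}") for i in range(2)]
pW2 = [Variable(f"w2_{i}") for i in range(2)]
pb2 = Variable("b2")

bounds = []
for i in range(2):
    for j in range(2):
        bounds += [pW1[i][j] >= W1[i][j] - eps, pW1[i][j] <= W1[i][j] + eps]
    bounds += [pb1[i] >= b1[i] - eps, pb1[i] <= b1[i] + eps]
    bounds += [pW2[i] >= W2[i] - eps, pW2[i] <= W2[i] + eps]
bounds += [pb2 >= b2 - eps, pb2 <= b2 + eps]

# Symbolic forward pass of the perturbed network.
h = [Variable(f"h_{i}") for i in range(2)]
net = []
for i in range(2):
    z = pW1[i][0] * x0[0] + pW1[i][1] * x0[1] + pb1[i]
    # ReLU via complementarity: h >= 0, h >= z, h*(h - z) == 0  =>  h == max(0, z).
    net += [h[i] >= 0, h[i] >= z, h[i] * (h[i] - z) == 0]
logit = pW2[0] * h[0] + pW2[1] * h[1] + pb2

# Violation query: does some eps-perturbation move the logit by more than tau?
violation = (logit - logit_nom) * (logit - logit_nom) > tau * tau

# delta-sat check with precision 0.001; a Box of witnessing perturbations is
# returned if the violation is delta-satisfiable, None otherwise (assumed behavior).
result = CheckSatisfiability(And(*bounds, *net, violation), 0.001)
print("locally robust at x0 (no delta-sat violation)" if result is None else result)
```

Under these assumptions, an unsatisfiable result certifies that no parameter perturbation within the eps-box shifts the logit by more than tau at x0, while a delta-sat answer yields a concrete box of perturbed parameters that witnesses the deviation; sweeping eps over candidate quantization error bounds then quantifies how much precision loss the network tolerates.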
