Verifying Properties of Binarized Deep Neural Networks

Understanding the properties of deep neural networks is an important challenge in deep learning. In this paper, we take a step in this direction by proposing a rigorous way of verifying properties of a popular class of neural networks, Binarized Neural Networks, using the well-developed machinery of Boolean satisfiability (SAT). Our main contribution is a construction that represents a binarized neural network as a Boolean formula; this encoding is the first exact Boolean representation of a deep neural network. Using this encoding, we leverage the power of modern SAT solvers, together with a proposed counterexample-guided search procedure, to verify various properties of these networks. A particular focus is the critical property of robustness to adversarial perturbations. For this property, our experimental results demonstrate that the approach scales to medium-sized deep neural networks used in image classification tasks. To the best of our knowledge, this is the first work to verify properties of deep neural networks using an exact Boolean encoding of the network.
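
To give a concrete flavor of what such a Boolean encoding involves, the sketch below (our own illustration, not the construction from the paper) encodes a single three-input binarized neuron as a cardinality constraint in CNF and asks a SAT solver whether the neuron can ever output -1. The specific weights and bias, the python-sat (pysat) package, and the Glucose3 solver are illustrative assumptions.

```python
# Minimal sketch, assuming the python-sat (pysat) package is available.
from pysat.card import CardEnc, EncType
from pysat.solvers import Glucose3

# Binarized neuron: inputs x1..x3 in {-1, +1}, weights w = [+1, -1, +1],
# bias b = 0, output = sign(w . x + b).
# Boolean view: variable i is True iff x_i = +1.  Input i contributes +1 to
# the sum exactly when the literal l_i (i if w_i = +1, -i if w_i = -1) is
# True, so the output is +1 iff at least ceil((n - b) / 2) = 2 literals hold.
weights = [1, -1, 1]
lits = [i + 1 if w > 0 else -(i + 1) for i, w in enumerate(weights)]  # [1, -2, 3]

# Property: "can the neuron output -1?"  Equivalently, at most 1 of the
# literals is True; any satisfying assignment of the CNF encoding of that
# cardinality constraint is a witnessing input.
cnf = CardEnc.atmost(lits=lits, bound=1, encoding=EncType.seqcounter)

with Glucose3(bootstrap_with=cnf.clauses) as solver:
    if solver.solve():
        model = set(solver.get_model())
        x = [+1 if v in model else -1 for v in (1, 2, 3)]
        print("output -1 is reachable, e.g. for input x =", x)
    else:
        print("the neuron always outputs +1")
```

A full network encoding would compose one such cardinality constraint per neuron, layer by layer, with fresh Boolean variables standing for the binary activations; sequential-counter encodings (EncType.seqcounter above) are one standard way to keep the resulting CNF compact.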
