Abstract Neural Networks

Deep Neural Networks (DNNs) are rapidly being applied to safety-critical domains such as drone and airplane control, motivating techniques for verifying the safety of their behavior. Unfortunately, DNN verification is NP-hard, and current algorithms scale exponentially with the number of nodes in the DNN. This paper introduces the notion of Abstract Neural Networks (ANNs), which can be used to soundly over-approximate DNNs while using fewer nodes. An ANN is like a DNN except that its weight matrices are replaced by values in a given abstract domain. We present a framework, parameterized by the abstract domain and the DNN's activation functions, for constructing a corresponding ANN, and we give necessary and sufficient conditions on the activation functions for the constructed ANN to soundly over-approximate the given DNN. Prior work on DNN abstraction was restricted to the interval domain and the ReLU activation function. Our framework can be instantiated with other abstract domains, such as octagons and polyhedra, as well as other activation functions, such as Leaky ReLU, Sigmoid, and Hyperbolic Tangent.
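
To make the interval instantiation concrete, here is a minimal sketch (not from the paper; all function names and the merging scheme are our own illustration). It propagates an input box through one layer whose weights are intervals, as would arise when several concrete nodes are merged and the element-wise min/max of their weight rows is taken, and it exploits the monotonicity of ReLU for a sound activation step.

```python
# Illustrative sketch of one interval-weighted ("abstract") layer.
# Assumption: weight entries are intervals [W_lo, W_hi] obtained by merging
# concrete nodes; inputs are boxes [x_lo, x_hi]. Not the paper's implementation.
import numpy as np

def interval_affine(W_lo, W_hi, x_lo, x_hi):
    """Sound interval matrix-vector product y = W @ x with interval W and x."""
    # Each element-wise product w*x attains its extremes at the four corners.
    cands = np.stack([W_lo * x_lo, W_lo * x_hi, W_hi * x_lo, W_hi * x_hi])
    lo = cands.min(axis=0).sum(axis=1)  # per-row lower bound of the dot product
    hi = cands.max(axis=0).sum(axis=1)  # per-row upper bound of the dot product
    return lo, hi

def interval_relu(lo, hi):
    """ReLU is monotone, so applying it to interval endpoints is sound."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Two concrete weight rows merged into one abstract row (element-wise min/max),
# then an input box [0,1] x [0.5,1.5] is pushed through the layer.
W_lo = np.array([[0.9, -1.1]])
W_hi = np.array([[1.2, -0.8]])
y_lo, y_hi = interval_relu(*interval_affine(
    W_lo, W_hi, np.array([0.0, 0.5]), np.array([1.0, 1.5])))
print(y_lo, y_hi)  # every concrete merged node's output lies in [y_lo, y_hi]
```

Replacing the box/interval arithmetic above with an octagon or polyhedron representation is exactly the kind of instantiation the framework is parameterized over.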
