Towards Verification-Aware Knowledge Distillation for Neural-Network Controlled Systems: Invited Paper

Neural networks are widely used in applications ranging from classification to control. Although these networks are composed of simple arithmetic operations, they are challenging to formally verify for properties such as reachability because of their nonlinear activation functions. In this paper, we make the observation that the Lipschitz continuity of a neural network not only plays a major role in the construction of reachable sets for neural-network controlled systems but can also be systematically controlled during training. We build on this observation to develop a novel verification-aware knowledge distillation framework that transfers the knowledge of a trained network to a new, easier-to-verify network. Experimental results show that our method substantially improves the reachability analysis of neural-network controlled systems for several state-of-the-art tools.
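The abstract only describes the approach at a high level. As a rough illustration of how a Lipschitz constant can be controlled while distilling a controller, the following is a minimal PyTorch sketch, not the paper's actual method: a small student network is trained to imitate a larger teacher, while a penalty on the product of the spectral norms of its weight matrices (a standard upper bound on the Lipschitz constant of a feed-forward ReLU network) keeps the student more amenable to reachability tools. All names (teacher, student, lipschitz_upper_bound, lam), the architectures, and the specific loss are hypothetical assumptions for illustration.

    import torch
    import torch.nn as nn

    def lipschitz_upper_bound(model: nn.Sequential) -> torch.Tensor:
        """Product of the spectral norms of the linear layers: an upper
        bound on the Lipschitz constant (w.r.t. the 2-norm) of a
        feed-forward network with 1-Lipschitz activations such as ReLU."""
        bound = torch.ones(())
        for layer in model:
            if isinstance(layer, nn.Linear):
                # largest singular value of the weight matrix
                bound = bound * torch.linalg.matrix_norm(layer.weight, ord=2)
        return bound

    # Teacher: stand-in for a pretrained, hard-to-verify controller.
    teacher = nn.Sequential(nn.Linear(4, 128), nn.ReLU(),
                            nn.Linear(128, 128), nn.ReLU(),
                            nn.Linear(128, 1))
    # Student: smaller network whose Lipschitz bound we penalize.
    student = nn.Sequential(nn.Linear(4, 32), nn.ReLU(),
                            nn.Linear(32, 1))

    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    lam = 0.1  # weight of the Lipschitz penalty (hypothetical value)

    for _ in range(1000):
        # sample states from an assumed operating domain [-1, 1]^4
        x = torch.rand(256, 4) * 2 - 1
        with torch.no_grad():
            y_teacher = teacher(x)  # teacher's control outputs
        # distillation loss (imitate the teacher) + Lipschitz penalty
        loss = nn.functional.mse_loss(student(x), y_teacher) \
               + lam * lipschitz_upper_bound(student)
        opt.zero_grad()
        loss.backward()
        opt.step()

In such a setup, lam trades off imitation fidelity against the tightness of the student's Lipschitz bound; a smaller bound generally yields tighter over-approximations in reachability analysis.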
