Improved Geometric Path Enumeration for Verifying ReLU Neural Networks
Stanley Bak | Hoang-Dung Tran | Kerianne Hobbs | Taylor T. Johnson