An Abstraction-Refinement Approach to Verifying Convolutional Neural Networks
[1] Clark Barrett,et al. An SMT-Based Approach for Verifying Binarized Neural Networks , 2020, TACAS.
[2] Taylor Johnson,et al. The Second International Verification of Neural Networks Competition (VNN-COMP 2021): Summary and Results , 2021, ArXiv.
[3] Mykel J. Kochenderfer,et al. Reluplex: a calculus for reasoning about deep neural networks , 2021, Formal Methods in System Design.
[4] Zahra Rahimi Afzal,et al. Abstraction based Output Range Analysis for Neural Networks , 2020, NeurIPS.
[5] Kyle D. Julian,et al. Parallelization Techniques for Verifying Neural Networks , 2020, 2020 Formal Methods in Computer Aided Design (FMCAD).
[6] Mykel J. Kochenderfer,et al. The Marabou Framework for Verification and Analysis of Deep Neural Networks , 2019, CAV.
[7] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[8] Guigang Zhang,et al. Deep Learning , 2016, Int. J. Semantic Comput..
[9] Andrew Zisserman,et al. Very Deep Convolutional Networks for Large-Scale Image Recognition , 2014, ICLR.
[10] Chih-Hong Cheng,et al. Maximum Resilience of Artificial Neural Networks , 2017, ATVA.
[11] Thomas A. Henzinger,et al. Into the unknown: Active monitoring of neural networks , 2021, RV.
[12] Min Zhang,et al. Tightening Robustness Verification of Convolutional Neural Networks with Fine-Grained Linear Approximation , 2021, AAAI.
[13] Junfeng Yang,et al. Formal Security Analysis of Neural Networks using Symbolic Intervals , 2018, USENIX Security Symposium.
[14] Yann LeCun,et al. The MNIST database of handwritten digits , 2005 .
[15] Cho-Jui Hsieh,et al. Efficient Neural Network Robustness Certification with General Activation Functions , 2018, NeurIPS.
[16] Dumitru Erhan,et al. Going deeper with convolutions , 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[17] Russ Tedrake,et al. Evaluating Robustness of Neural Networks with Mixed Integer Programming , 2017, ICLR.
[18] Mykel J. Kochenderfer,et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks , 2017, CAV.
[19] Yoshua Bengio,et al. Generative Adversarial Nets , 2014, NIPS.
[20] Daniel Kroening,et al. A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability , 2018, Comput. Sci. Rev..
[21] Corina S. Pasareanu,et al. DeepSafe: A Data-Driven Approach for Assessing Robustness of Neural Networks , 2018, ATVA.
[22] Guy Katz,et al. Towards Scalable Verification of Deep Reinforcement Learning , 2021, 2021 Formal Methods in Computer Aided Design (FMCAD).
[23] Edmund M. Clarke,et al. Counterexample-guided abstraction refinement , 2003, 10th International Symposium on Temporal Representation and Reasoning (TIME).
[24] Clark W. Barrett,et al. Provably Minimally-Distorted Adversarial Examples , 2017 .
[25] Michael Schapira,et al. Verifying Deep-RL-Driven Systems , 2019, NetAI@SIGCOMM.
[26] Mykel J. Kochenderfer,et al. Global Optimization of Objective Functions Represented by ReLU Networks , 2021, Machine Learning.
[27] Aditya V. Thakur,et al. Correcting Deep Neural Networks with Small, Generalizing Patches , 2019 .
[28] Jun Zhao,et al. Recurrent Convolutional Neural Networks for Text Classification , 2015, AAAI.
[29] Jan Kretínský,et al. DeepAbstract: Neural Network Abstraction for Accelerating Verification , 2020, ATVA.
[30] Thomas A. Henzinger,et al. Formal Methods with a Touch of Magic , 2020, 2020 Formal Methods in Computer Aided Design (FMCAD).
[31] Ori Lahav,et al. Pruning and Slicing Neural Networks using Formal Verification , 2021, 2021 Formal Methods in Computer Aided Design (FMCAD).
[32] Min Wu,et al. Safety Verification of Deep Neural Networks , 2016, CAV.
[33] Jürgen Schmidhuber,et al. Multi-column deep neural networks for image classification , 2012, 2012 IEEE Conference on Computer Vision and Pattern Recognition.
[34] Dahua Lin,et al. Fastened CROWN: Tightened Neural Network Robustness Certificates , 2019, AAAI.
[35] Alessio Lomuscio,et al. An approach to reachability analysis for feed-forward ReLU neural networks , 2017, ArXiv.
[36] Caterina Urban,et al. Perfectly parallel fairness certification of neural networks , 2020, Proc. ACM Program. Lang..
[37] Yannic Noller,et al. NNrepair: Constraint-based Repair of Neural Network Classifiers , 2021, CAV.
[38] Ashish Tiwari,et al. Output Range Analysis for Deep Neural Networks , 2017, ArXiv.
[39] Isil Dillig,et al. Optimization and abstraction: a synergistic approach for analyzing neural network robustness , 2019, PLDI.
[40] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[41] Antonio Criminisi,et al. Measuring Neural Net Robustness with Constraints , 2016, NIPS.
[42] Sijia Liu,et al. CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks , 2018, AAAI.
[43] Swarat Chaudhuri,et al. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation , 2018, 2018 IEEE Symposium on Security and Privacy (SP).
[44] Cho-Jui Hsieh,et al. A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks , 2019, NeurIPS.
[45] Alessio Lomuscio,et al. Formal Verification of CNN-based Perception Systems , 2018, ArXiv.
[46] Shweta Shinde,et al. Quantitative Verification of Neural Networks and Its Security Applications , 2019, CCS.
[47] Inderjit S. Dhillon,et al. Towards Fast Computation of Certified Robustness for ReLU Networks , 2018, ICML.
[48] John Schulman,et al. Concrete Problems in AI Safety , 2016, ArXiv.
[49] Radu Calinescu,et al. DeepCert: Verification of Contextually Relevant Robustness for Neural Network Image Classifiers , 2021, SAFECOMP.
[50] Nathan Srebro,et al. Equality of Opportunity in Supervised Learning , 2016, NIPS.
[51] Timon Gehr,et al. An abstract domain for certifying neural networks , 2019, Proc. ACM Program. Lang..
[52] Weiming Xiang,et al. Verification of Deep Convolutional Neural Networks Using ImageStars , 2020, CAV.
[54] Guy Katz,et al. Minimal Modifications of Deep Neural Networks using Verification , 2020, LPAR.
[56] Guy Katz,et al. Minimal Multi-Layer Modifications of Deep Neural Networks , 2021, NSV/FoMLAS@CAV.
[57] Justin Emile Gottschlich,et al. An Abstraction-Based Framework for Neural Network Verification , 2019, CAV.
[58] Christian Tjandraatmadja,et al. Strong mixed-integer programming formulations for trained neural networks , 2018, Mathematical Programming.
[59] Taylor T. Johnson,et al. Reachability Analysis of Convolutional Neural Networks , 2021, ArXiv.
[60] Corina S. Pasareanu,et al. Automated Assume-Guarantee Reasoning by Abstraction Refinement , 2008, CAV.
[61] Rüdiger Ehlers,et al. Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks , 2017, ATVA.
[62] Krishnendu Chatterjee,et al. Run-Time Optimization for Learned Controllers Through Quantitative Games , 2019, CAV.
[63] Geoffrey E. Hinton,et al. ImageNet classification with deep convolutional neural networks , 2012, Commun. ACM.
[64] Jin Xu. Conv-Reluplex: A Verification Framework For Convolution Neural Networks (S) , 2021, Proceedings of the 33rd International Conference on Software Engineering and Knowledge Engineering (SEKE).
[65] Clark W. Barrett,et al. Simplifying Neural Networks Using Formal Verification , 2020, NFM.
[66] Nham Le,et al. Verification of Recurrent Neural Networks for Cognitive Tasks via Reachability Analysis , 2020, ECAI.
[67] Guy Katz,et al. Verifying Recurrent Neural Networks using Invariant Inference , 2020, ATVA.
[68] Guy Katz,et al. Verifying learning-augmented systems , 2021, SIGCOMM.
[69] Mykel J. Kochenderfer,et al. Toward Scalable Verification for Safety-Critical Deep Networks , 2018, ArXiv.
[70] Mykel J. Kochenderfer,et al. Towards Proving the Adversarial Robustness of Deep Neural Networks , 2017, FVAV@iFM.
[71] Ekaterina Komendantskaya,et al. Property-driven Training: All You (N)Ever Wanted to Know About , 2021, ArXiv.