CLEVEREST: Accelerating CEGAR-based Neural Network Verification via Adversarial Attacks

[1] M. Zhang et al. QVIP: An ILP-based Formal Verification Approach for Quantized Neural Networks, 2022, ASE.

[2] Sen Chen et al. AS2T: Arbitrary Source-To-Target Adversarial Attack on Speaker Recognition Systems, 2022, IEEE Transactions on Dependable and Secure Computing.

[3] Guy Katz et al. An Abstraction-Refinement Approach to Verifying Convolutional Neural Networks, 2022, ATVA.

[4] Fu Song et al. Eager Falsification for Accelerating Robustness Verification of Deep Neural Networks, 2021, ISSRE.

[5] Martin T. Vechev et al. Shared Certificates for Neural Network Verification, 2021, CAV.

[6] Martin Rinard et al. Verifying Low-dimensional Input Neural Networks via Input Quantization, 2021, SAS.

[7] Sriram Sankaranarayanan et al. Static analysis of ReLU neural networks with tropical polyhedra, 2021, SAS.

[8] Liqian Chen et al. Enhancing Robustness Verification for Deep Neural Networks via Symbolic Propagation, 2021, Formal Aspects of Computing.

[9] Jun Sun et al. Attack as defense: characterizing adversarial examples using robustness, 2021, ISSTA.

[10] Taolue Chen et al. BDD4BNN: A BDD-based Quantitative Analysis Framework for Binarized Neural Networks, 2021, CAV.

[11] Mark Niklas Müller et al. PRIMA: general and precise neural network certification via scalable convex hull approximations, 2021, Proc. ACM Program. Lang.

[12] Fu Song et al. Verifying ReLU Neural Networks from a Model Checking Perspective, 2020, Journal of Computer Science and Technology.

[13] Jun Sun et al. Improving Neural Network Verification through Spurious Region Guided Refinement, 2020, TACAS.

[14] Jianye Hao et al. An Empirical Study on Correlation between Coverage and Robustness for Deep Neural Networks, 2020, ICECCS.

[15] Matthew Sotoudeh et al. Abstract Neural Networks, 2020, SAS.

[16] Nham Le et al. Verification of Recurrent Neural Networks for Cognitive Tasks via Reachability Analysis, 2020, ECAI.

[17] Martin T. Vechev et al. Provably Robust Adversarial Examples, 2020, ICLR.

[18] Zahra Rahimi Afzal et al. Abstraction based Output Range Analysis for Neural Networks, 2020, NeurIPS.

[19] Jan Kretínský et al. DeepAbstract: Neural Network Abstraction for Accelerating Verification, 2020, ATVA.

[20] Clark W. Barrett et al. Simplifying Neural Networks Using Formal Verification, 2020, NFM.

[21] Yang Liu et al. Advanced evasion attacks and mitigations on practical ML-based phishing website classifiers, 2020, Int. J. Intell. Syst.

[22] Caterina Urban et al. Perfectly parallel fairness certification of neural networks, 2019, Proc. ACM Program. Lang.

[23] Yang Liu et al. Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems, 2021, IEEE S&P.

[24] Justin Emile Gottschlich et al. An Abstraction-Based Framework for Neural Network Verification, 2019, CAV.

[25] Weiming Xiang et al. Star-Based Reachability Analysis of Deep Neural Networks, 2019, FM.

[26] Pushmeet Kohli et al. Branch and Bound for Piecewise Linear Neural Network Verification, 2019, J. Mach. Learn. Res.

[27] Mykel J. Kochenderfer et al. The Marabou Framework for Verification and Analysis of Deep Neural Networks, 2019, CAV.

[28] Zhengfeng Yang et al. Robustness Verification of Classification Deep Neural Networks via Linear Programming, 2019, CVPR.

[29] Fu Song et al. Taking Care of the Discretization Problem: A Comprehensive Study of the Discretization Problem and a Black-Box Adversarial Attack in Discrete Integer Domain, 2019, IEEE Transactions on Dependable and Secure Computing.

[30] Liqian Chen et al. Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification, 2019, SAS.

[31] Timon Gehr et al. An abstract domain for certifying neural networks, 2019, Proc. ACM Program. Lang.

[32] Junfeng Yang et al. Efficient Formal Safety Analysis of Neural Networks, 2018, NeurIPS.

[33] Shin Yoo et al. Guiding Deep Learning System Testing Using Surprise Adequacy, 2019, ICSE.

[34] Swarat Chaudhuri et al. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation, 2018, IEEE S&P.

[35] Lei Ma et al. DeepMutation: Mutation Testing of Deep Learning Systems, 2018, ISSRE.

[36] Daniel Kroening et al. Concolic Testing for Deep Neural Networks, 2018, ASE.

[37] Junfeng Yang et al. Formal Security Analysis of Neural Networks using Symbolic Intervals, 2018, USENIX Security Symposium.

[38] Ashish Tiwari et al. Output Range Analysis for Deep Feedforward Neural Networks, 2018, NFM.

[39] Lei Ma et al. DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems, 2018, ASE.

[40] Russ Tedrake et al. Evaluating Robustness of Neural Networks with Mixed Integer Programming, 2017, ICLR.

[41] J. Zico Kolter et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.

[42] Jinfeng Yi et al. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models, 2017, AISec@CCS.

[43] Aleksander Madry et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.

[44] Junfeng Yang et al. DeepXplore: Automated Whitebox Testing of Deep Learning Systems, 2017, SOSP.

[45] Rüdiger Ehlers. Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks, 2017, ATVA.

[46] Mykel J. Kochenderfer et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks, 2017, CAV.

[47] Min Wu et al. Safety Verification of Deep Neural Networks, 2016, CAV.

[48] Mykel J. Kochenderfer et al. Policy compression for aircraft collision avoidance systems, 2016, DASC.

[49] David A. Wagner et al. Towards Evaluating the Robustness of Neural Networks, 2017, IEEE S&P.

[50] Samy Bengio et al. Adversarial examples in the physical world, 2016, ICLR.

[51] Ananthram Swami et al. The Limitations of Deep Learning in Adversarial Settings, 2016, EuroS&P.

[52] Seyed-Mohsen Moosavi-Dezfooli et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2016, CVPR.

[53] Heike Wehrheim et al. Just Test What You Cannot Verify!, 2015, FASE.

[54] Jonathon Shlens et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.

[55] Joan Bruna et al. Intriguing properties of neural networks, 2013, ICLR.

[56] Geoffrey E. Hinton et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.

[57] Hongseok Yang et al. Abstractions from tests, 2012, POPL.

[58] Luca Pulina et al. An Abstraction-Refinement Approach to Verification of Artificial Neural Networks, 2010, CAV.

[59] Robert J. Simmons et al. Proofs from Tests, 2008, IEEE Transactions on Software Engineering.

[60] Thomas A. Henzinger et al. SYNERGY: a new algorithm for property checking, 2006, FSE.

[61] Thomas Ball et al. Testing, abstraction, theorem proving: better together!, 2006, ISSTA.

[62] Pankaj Jalote et al. Program partitioning: a framework for combining static and dynamic analysis, 2006, WODA.

[63] Helmut Veith et al. Counterexample-guided abstraction refinement for symbolic model checking, 2003, JACM.

[64] Caterina Urban et al. Reduced Products of Abstract Domains for Fairness Certification of Neural Networks, 2021, SAS.

[65] Alessandro Orso et al. Probabilistic Lipschitz Analysis of Neural Networks, 2020, SAS.

[66] Matthew Mirman et al. Fast and Effective Robustness Certification, 2018, NeurIPS.