Reachability Analysis of Deep Neural Networks with Provable Guarantees
[1] Victor Gergel, et al. Adaptive nested optimization scheme for multidimensional global search, J. Glob. Optim., 2016.
[2] Guy Katz, et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks, CAV, 2017.
[3] Anh Nguyen, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, CVPR, 2015.
[4] Vladimir A. Grishagin, et al. Convergence conditions and numerical comparison of global optimization methods based on dimensionality reduction schemes, Appl. Math. Comput., 2018.
[5] Rudy Bunel, et al. A Unified View of Piecewise Linear Neural Network Verification, NeurIPS, 2018.
[6] Christian Szegedy, et al. Intriguing properties of neural networks, ICLR, 2014.
[7] Xiaowei Huang, et al. Safety Verification of Deep Neural Networks, CAV, 2017.
[8] Rudy Bunel, et al. Piecewise Linear Neural Network verification: A comparative study, arXiv, 2017.
[9] Chih-Hong Cheng, et al. Maximum Resilience of Artificial Neural Networks, ATVA, 2017.
[10] Wenjie Ruan, et al. Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the L0 Norm, arXiv, 2018.
[11] Dario Amodei, et al. Concrete Problems in AI Safety, arXiv, 2016.
[12] Alessio Lomuscio, et al. An approach to reachability analysis for feed-forward ReLU neural networks, arXiv, 2017.
[13] Aimo A. Törn, et al. Global Optimization, Springer, 1989.
[14] Nicholas Carlini, et al. Towards Evaluating the Robustness of Neural Networks, IEEE Symposium on Security and Privacy (S&P), 2017.
[15] S. A. Piyavskii. An algorithm for finding the absolute extremum of a function, USSR Computational Mathematics and Mathematical Physics, 1972.
[16] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, CVPR, 2017.
[17] Nicolas Papernot, et al. The Limitations of Deep Learning in Adversarial Settings, IEEE European Symposium on Security and Privacy (EuroS&P), 2016.
[18] Souradeep Dutta, et al. Output Range Analysis for Deep Neural Networks, arXiv, 2017.
[19] Eric Wong, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, ICML, 2018.
[20] Rüdiger Ehlers. Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks, ATVA, 2017.
[21] Ke Chen, et al. Applied Mathematics and Computation, 2022.
[22] Houshang H. Sohrab. Basic Real Analysis, Birkhäuser, 2003.
[23] Ian J. Goodfellow, et al. Explaining and Harnessing Adversarial Examples, ICLR, 2015.
[24] Youcheng Sun, et al. Concolic Testing for Deep Neural Networks, 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE), 2018.
[25] Nina Narodytska, et al. Verifying Properties of Binarized Deep Neural Networks, AAAI, 2018.
[26] Weiming Xiang, et al. Output Reachable Set Estimation and Verification for Multilayer Neural Networks, IEEE Transactions on Neural Networks and Learning Systems, 2018.
[27] Luca Pulina, et al. An Abstraction-Refinement Approach to Verification of Artificial Neural Networks, CAV, 2010.
[28] Matthew Wicker, et al. Feature-Guided Black-Box Safety Testing of Deep Neural Networks, TACAS, 2018.