暂无分享,去创建一个
[1] Aleksander Madry,et al. Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability , 2018, ICLR.
[2] John Tran,et al. cuDNN: Efficient Primitives for Deep Learning , 2014, ArXiv.
[3] Aleksander Madry,et al. On Adaptive Attacks to Adversarial Example Defenses , 2020, NeurIPS.
[4] Jeff Johnson,et al. Fast Convolutional Nets With fbfft: A GPU Performance Evaluation , 2014, ICLR.
[5] Alessio Lomuscio,et al. An approach to reachability analysis for feed-forward ReLU neural networks , 2017, ArXiv.
[6] David A. Wagner,et al. Towards Evaluating the Robustness of Neural Networks , 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[7] Ran El-Yaniv,et al. Binarized Neural Networks , 2016, ArXiv.
[8] Matteo Fischetti,et al. Deep neural networks and mixed integer linear optimization , 2018, Constraints.
[9] Rüdiger Ehlers,et al. Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks , 2017, ATVA.
[10] Harini Kannan,et al. Adversarial Logit Pairing , 2018, NIPS 2018.
[11] Chih-Hong Cheng,et al. Maximum Resilience of Artificial Neural Networks , 2017, ATVA.
[12] Aditi Raghunathan,et al. Semidefinite relaxations for certifying robustness to adversarial examples , 2018, NeurIPS.
[13] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[14] Leonid Ryzhyk,et al. Verifying Properties of Binarized Deep Neural Networks , 2017, AAAI.
[15] Matthew Mirman,et al. Differentiable Abstract Interpretation for Provably Robust Neural Networks , 2018, ICML.
[16] Andrew Lavin,et al. Fast Algorithms for Convolutional Neural Networks , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[17] Inderjit S. Dhillon,et al. Towards Fast Computation of Certified Robustness for ReLU Networks , 2018, ICML.
[18] Antoine Miné,et al. Relational Abstract Domains for the Detection of Floating-Point Run-Time Errors , 2004, ESOP.
[19] Bernd Becker,et al. Towards Verification of Artificial Neural Networks , 2015, MBMV.
[20] J. Zico Kolter,et al. Provable defenses against adversarial examples via the convex outer adversarial polytope , 2017, ICML.
[21] Martin Rinard,et al. Correctness Verification of Neural Networks , 2019, ArXiv.
[22] David A. Wagner,et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples , 2018, ICML.
[23] Yuan Yu,et al. TensorFlow: A system for large-scale machine learning , 2016, OSDI.
[24] Swarat Chaudhuri,et al. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation , 2018, 2018 IEEE Symposium on Security and Privacy (SP).
[25] Mykel J. Kochenderfer,et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks , 2017, CAV.
[26] Adnan Darwiche,et al. Verifying Binarized Neural Networks by Angluin-Style Learning , 2019, SAT.
[27] Russ Tedrake,et al. Evaluating Robustness of Neural Networks with Mixed Integer Programming , 2017, ICLR.
[28] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[29] Ashish Tiwari,et al. Output Range Analysis for Deep Feedforward Neural Networks , 2018, NFM.
[30] Timon Gehr,et al. An abstract domain for certifying neural networks , 2019, Proc. ACM Program. Lang..
[31] Min Wu,et al. Safety Verification of Deep Neural Networks , 2016, CAV.
[32] Natalia Gimelshein,et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library , 2019, NeurIPS.
[33] Pushmeet Kohli,et al. Training verified learners with learned verifiers , 2018, ArXiv.
[34] Cho-Jui Hsieh,et al. Efficient Neural Network Robustness Certification with General Activation Functions , 2018, NeurIPS.
[35] Matthew Mirman,et al. Fast and Effective Robustness Certification , 2018, NeurIPS.