Cho-Jui Hsieh, Suman Jana, Shiqi Wang, Kaidi Xu, Xue Lin, Huan Zhang, Yihan Wang
[1] Simon Cruanes, et al. Superposition for Lambda-Free Higher-Order Logic, 2018, IJCAR.
[2] J. Zico Kolter, et al. Scaling provable adversarial defenses, 2018, NeurIPS.
[3] Matthew Mirman, et al. Differentiable Abstract Interpretation for Provably Robust Neural Networks, 2018, ICML.
[4] M. Pawan Kumar, et al. Neural Network Branching for Neural Network Verification, 2019, ICLR.
[5] Swarat Chaudhuri, et al. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation, 2018, IEEE Symposium on Security and Privacy (SP).
[6] Pushmeet Kohli, et al. A Unified View of Piecewise Linear Neural Network Verification, 2017, NeurIPS.
[7] Cho-Jui Hsieh, et al. Efficient Neural Network Robustness Certification with General Activation Functions, 2018, NeurIPS.
[8] Isil Dillig, et al. Optimization and abstraction: a synergistic approach for analyzing neural network robustness, 2019, PLDI.
[9] Pushmeet Kohli, et al. Efficient Neural Network Verification with Exactness Characterization, 2019, UAI.
[10] Martin Vechev, et al. Beyond the Single Neuron Convex Barrier for Neural Network Certification, 2019, NeurIPS.
[11] Junfeng Yang, et al. Efficient Formal Safety Analysis of Neural Networks, 2018, NeurIPS.
[12] Dahua Lin, et al. Fastened CROWN: Tightened Neural Network Robustness Certificates, 2019, AAAI.
[13] Cho-Jui Hsieh, et al. Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond, 2020, NeurIPS.
[14] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[15] Cho-Jui Hsieh, et al. A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks, 2019, NeurIPS.
[16] Yizheng Chen, et al. MixTrain: Scalable Training of Formally Robust Neural Networks, 2018, arXiv.
[17] Cho-Jui Hsieh, et al. Towards Stable and Efficient Training of Verifiably Robust Neural Networks, 2019, ICLR.
[18] Inderjit S. Dhillon, et al. Towards Fast Computation of Certified Robustness for ReLU Networks, 2018, ICML.
[19] Russ Tedrake, et al. Evaluating Robustness of Neural Networks with Mixed Integer Programming, 2017, ICLR.
[20] Mykel J. Kochenderfer, et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks, 2017, CAV.
[21] Gagandeep Singh, et al. Neural Network Robustness Verification on GPUs, 2020, arXiv.
[22] Timon Gehr, et al. Boosting Robustness Certification of Neural Networks, 2018, ICLR.
[23] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[24] Pushmeet Kohli, et al. Training verified learners with learned verifiers, 2018, arXiv.
[25] Min Wu, et al. Safety Verification of Deep Neural Networks, 2016, CAV.
[26] Ashish Tiwari, et al. Output Range Analysis for Deep Feedforward Neural Networks, 2018, NFM.
[27] Pushmeet Kohli, et al. Branch and Bound for Piecewise Linear Neural Network Verification, 2020, J. Mach. Learn. Res..
[28] Matthew Mirman, et al. Fast and Effective Robustness Certification, 2018, NeurIPS.
[29] Ian Goodfellow, et al. Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming, 2020, NeurIPS.
[30] Pushmeet Kohli, et al. Lagrangian Decomposition for Neural Network Verification, 2020, UAI.
[31] Rüdiger Ehlers, et al. Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks, 2017, ATVA.
[32] Aditi Raghunathan, et al. Semidefinite relaxations for certifying robustness to adversarial examples, 2018, NeurIPS.
[33] Timon Gehr, et al. An abstract domain for certifying neural networks, 2019, Proc. ACM Program. Lang..
[34] Juan Pablo Vielma, et al. The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron Relaxations for Neural Network Verification, 2020, NeurIPS.
[35] Junfeng Yang, et al. Formal Security Analysis of Neural Networks using Symbolic Intervals, 2018, USENIX Security Symposium.
[36] Dusan M. Stipanovic, et al. Fast Neural Network Verification via Shadow Prices, 2019, arXiv.