CAMUS: A Framework to Build Formal Specifications for Deep Perception Systems Using Simulators
Julien Girard-Satabin | Zakaria Chihani | Guillaume Charpiat | Marc Schoenauer
[1] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[2] Yann Chevaleyre, et al. Robust Neural Networks using Randomized Adversarial Training, 2019, ArXiv.
[3] Kurt Keutzer, et al. Trust Region Based Adversarial Attack on Neural Networks, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[4] Bernhard Schölkopf, et al. First-Order Adversarial Vulnerability of Neural Networks and Input Dimension, 2018, ICML.
[5] Aaron Stump, et al. SMT-COMP: Satisfiability Modulo Theories Competition, 2005, CAV.
[6] Nic Ford, et al. Adversarial Examples Are a Natural Consequence of Test Error in Noise, 2019, ICML.
[7] Russ Tedrake, et al. Verifying Neural Networks with Mixed Integer Programming, 2017, ArXiv.
[8] L. D. Moura, et al. The Yices SMT Solver, 2006.
[9] Yannick Jestin, et al. An Introduction to ACAS Xu and the Challenges Ahead, 2016, 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC).
[10] Timo Aila, et al. A Style-Based Generator Architecture for Generative Adversarial Networks, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[11] Germán Ros, et al. CARLA: An Open Urban Driving Simulator, 2017, CoRL.
[12] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[13] Luca Pulina, et al. An Abstraction-Refinement Approach to Verification of Artificial Neural Networks, 2010, CAV.
[14] Nikolaj Bjørner, et al. Z3: An Efficient SMT Solver, 2008, TACAS.
[15] Russ Tedrake, et al. Evaluating Robustness of Neural Networks with Mixed Integer Programming, 2017, ICLR.
[16] François Bobot, et al. Real Behavior of Floating Point, 2017, SMT.
[17] Thierry Chateau, et al. Deep MANTA: A Coarse-to-Fine Many-Task Network for Joint 2D and 3D Vehicle Analysis from Monocular Image, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[18] Xin Zhang, et al. End to End Learning for Self-Driving Cars, 2016, ArXiv.
[19] J. Zico Kolter, et al. Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope, 2017, ICML.
[20] Bernhard Schölkopf, et al. Adversarial Vulnerability of Neural Networks Increases With Input Dimension, 2018, ArXiv.
[21] Joan Bruna, et al. Intriguing Properties of Neural Networks, 2013, ICLR.
[22] Patrick D. McDaniel, et al. Transferability in Machine Learning: From Phenomena to Black-Box Attacks Using Adversarial Samples, 2016, ArXiv.
[23] Cesare Tinelli, et al. Satisfiability Modulo Theories, 2021, Handbook of Satisfiability.
[24] Aleksander Madry, et al. Adversarial Examples Are Not Bugs, They Are Features, 2019, NeurIPS.
[25] Luca Antiga, et al. Automatic Differentiation in PyTorch, 2017.
[26] Patrick Cousot, et al. Abstract Interpretation: A Unified Lattice Model for Static Analysis of Programs by Construction or Approximation of Fixpoints, 1977, POPL.
[27] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[28] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[29] Sijia Liu, et al. CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks, 2018, AAAI.
[30] Titouan Parcollet, et al. The PyTorch-Kaldi Speech Recognition Toolkit, 2018, ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[31] Mykel J. Kochenderfer, et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks, 2017, CAV.
[32] Mykel J. Kochenderfer, et al. The Marabou Framework for Verification and Analysis of Deep Neural Networks, 2019, CAV.
[33] Pushmeet Kohli, et al. A Unified View of Piecewise Linear Neural Network Verification, 2017, NeurIPS.
[34] Junfeng Yang, et al. Formal Security Analysis of Neural Networks Using Symbolic Intervals, 2018, USENIX Security Symposium.
[35] Samy Bengio, et al. Adversarial Examples in the Physical World, 2016, ICLR.
[36] Sanjit A. Seshia, et al. VerifAI: A Toolkit for the Formal Design and Analysis of Artificial Intelligence-Based Systems, 2019, CAV.