[1] Weiming Xiang, et al. Reachable Set Computation and Safety Verification for Neural Networks with ReLU Activations, 2017, arXiv.
[2] Eugene H. Gover,et al. Determinants and the volumes of parallelotopes and zonotopes , 2010 .
[3] Timon Gehr, et al. An Abstract Domain for Certifying Neural Networks, 2019, Proc. ACM Program. Lang.
[4] J. Jossinet. Variability of impedivity in normal and pathological breast tissue , 1996, Medical and Biological Engineering and Computing.
[5] Timon Gehr,et al. Boosting Robustness Certification of Neural Networks , 2018, ICLR.
[6] Timothy A. Mann, et al. On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models, 2018, arXiv.
[7] Riccardo Leardi,et al. PARVUS: An Extendable Package of Programs for Data Exploration , 1988 .
[8] Antonio Criminisi,et al. Measuring Neural Net Robustness with Constraints , 2016, NIPS.
[9] Roland Vollgraf, et al. Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, arXiv.
[10] Matthias Hein,et al. Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation , 2017, NIPS.
[11] Aditi Raghunathan,et al. Semidefinite relaxations for certifying robustness to adversarial examples , 2018, NeurIPS.
[12] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[13] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.
[14] Olvi L. Mangasarian,et al. Nuclear feature extraction for breast tumor diagnosis , 1993, Electronic Imaging.
[15] Mykel J. Kochenderfer,et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks , 2017, CAV.
[16] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[17] Xiaowei Huang,et al. Reachability Analysis of Deep Neural Networks with Provable Guarantees , 2018, IJCAI.
[18] Matthew Mirman,et al. Differentiable Abstract Interpretation for Provably Robust Neural Networks , 2018, ICML.
[19] Swarat Chaudhuri,et al. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation , 2018, 2018 IEEE Symposium on Security and Privacy (SP).
[20] Mykel J. Kochenderfer,et al. Algorithms for Verifying Deep Neural Networks , 2019, Found. Trends Optim..
[21] Wolfgang Kuehn,et al. Rigorously computed orbits of dynamical systems without the wrapping effect , 1998, Computing.
[22] Percy Liang,et al. Certified Defenses for Data Poisoning Attacks , 2017, NIPS.
[23] Matthew Mirman,et al. Fast and Effective Robustness Certification , 2018, NeurIPS.
[24] R. Fisher. The Use of Multiple Measurements in Taxonomic Problems, 1936.
[25] J. Zico Kolter,et al. Provable defenses against adversarial examples via the convex outer adversarial polytope , 2017, ICML.
[26] Thomas F. Brooks,et al. Airfoil self-noise and prediction , 1989 .