Ekaterina Komendantskaya, Daniel Kienitz, Wen Kokke, Marco Casadio, Matthew Daggitt, Rob Stewart
[1] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy, 2019, ICML.
[2] Cem Anil, et al. Sorting out Lipschitz function approximation, 2018, ICML.
[3] Francisco Eiras, et al. PaRoT: A Practical Framework for Robust Deep Neural Network Training, 2020, NFM.
[4] Frank Allgöwer, et al. Training Robust Neural Networks Using Lipschitz Bounds, 2020, IEEE Control Systems Letters.
[5] Bernhard Pfahringer, et al. Regularisation of neural networks by enforcing Lipschitz continuity, 2018, Machine Learning.
[6] Mykel J. Kochenderfer, et al. The Marabou Framework for Verification and Analysis of Deep Neural Networks, 2019, CAV.
[7] Maneesh Kumar Singh, et al. Lipschitz Properties for Deep Convolutional Networks, 2017, arXiv.
[8] Taylor T. Johnson, et al. Improved Geometric Path Enumeration for Verifying ReLU Neural Networks, 2020, CAV.
[9] Johannes Stallkamp, et al. The German Traffic Sign Recognition Benchmark: A multi-class classification competition, 2011, International Joint Conference on Neural Networks (IJCNN).
[10] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[11] Guy Van den Broeck, et al. A Semantic Loss Function for Deep Learning with Symbolic Knowledge, 2017, ICML.
[12] Taghi M. Khoshgoftaar, et al. A survey on Image Data Augmentation for Deep Learning, 2019, Journal of Big Data.
[13] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[14] Swarat Chaudhuri, et al. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation, 2018, IEEE Symposium on Security and Privacy (SP).
[15] Min Wu, et al. Safety Verification of Deep Neural Networks, 2016, CAV.
[16] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, arXiv.
[17] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[18] Ekaterina Komendantskaya, et al. Continuous Verification of Machine Learning: a Declarative Programming Approach, 2020, PPDP.
[19] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[20] Mislav Balunovic, et al. DL2: Training and Querying Neural Networks with Logic, 2019, ICML.
[21] Aleksander Madry, et al. Robustness May Be at Odds with Accuracy, 2018, ICLR.
[22] Aditi Raghunathan, et al. Adversarial Training Can Hurt Generalization, 2019, arXiv.
[23] Timon Gehr, et al. An abstract domain for certifying neural networks, 2019, Proc. ACM Program. Lang.
[24] Luca Rigazio, et al. Towards Deep Neural Network Architectures Robust to Adversarial Examples, 2014, ICLR.