Sebastian Sudholt | Oliver Willers | Shervin Raafatnia | Stephanie Abrecht
[1] David A. Wagner,et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples , 2018, ICML.
[2] Nir Morgulis,et al. Fooling a Real Car with Adversarial Traffic Signs , 2019, ArXiv.
[3] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[4] Simon Burton,et al. Confidence Arguments for Evidence of Performance in Machine Learning for Highly Automated Driving Functions , 2019, SAFECOMP Workshops.
[5] Kush R. Varshney,et al. Engineering safety in machine learning , 2016, 2016 Information Theory and Applications Workshop (ITA).
[6] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.
[7] Colin Raffel,et al. Thermometer Encoding: One Hot Way To Resist Adversarial Examples , 2018, ICLR.
[8] Julien Cornebise,et al. Weight Uncertainty in Neural Networks , 2015, ICML.
[9] Christoph H. Lampert,et al. Attribute-Based Classification for Zero-Shot Visual Object Categorization , 2014, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[10] Mark Lee,et al. On Physical Adversarial Patches for Object Detection , 2019, ArXiv.
[11] Aleksander Madry,et al. Exploring the Landscape of Spatial Robustness , 2017, ICML.
[12] Kilian Q. Weinberger,et al. On Calibration of Modern Neural Networks , 2017, ICML.
[13] Oliver Zendel,et al. CV-HAZOP: Introducing Test Data Validation for Computer Vision , 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[14] Rick Salay,et al. An Analysis of ISO 26262: Machine Learning and Safety in Automotive Software , 2018 .
[15] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[16] Julien Cornebise,et al. Weight Uncertainty in Neural Networks , 2015, ArXiv.
[17] Thomas G. Dietterich,et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations , 2019, ICLR.
[18] Martín Abadi,et al. Adversarial Patch , 2017, ArXiv.
[19] J. Zico Kolter,et al. Adversarial camera stickers: A physical camera-based attack on deep learning systems , 2019, ICML.
[20] Jan Hendrik Metzen,et al. On Detecting Adversarial Perturbations , 2017, ICLR.
[21] Simon Burton,et al. Structuring Validation Targets of a Machine Learning Function Applied to Automated Driving , 2018, SAFECOMP.
[22] Philip Koopman,et al. How Many Operational Design Domains, Objects, and Events? , 2019, SafeAI@AAAI.
[23] Tim Kelly,et al. Establishing Safety Criteria for Artificial Neural Networks , 2003, KES.
[24] Oliver Zendel,et al. WildDash - Creating Hazard-Aware Benchmarks , 2018, ECCV.
[25] Zhitao Gong,et al. Strike (With) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[26] Dawn Song,et al. Physical Adversarial Examples for Object Detectors , 2018, WOOT @ USENIX Security Symposium.
[27] Mark Harman,et al. Machine Learning Testing: Survey, Landscapes and Horizons , 2019, IEEE Transactions on Software Engineering.
[28] Yan Liu,et al. Application of Neural Networks in High Assurance Systems: A Survey , 2010, Applications of Neural Networks in High Assurance Systems.
[29] Gábor Lugosi,et al. Introduction to Statistical Learning Theory , 2004, Advanced Lectures on Machine Learning.
[30] Christian Haase-Schuetz,et al. Estimating Labeling Quality with Deep Object Detectors , 2019, 2019 IEEE Intelligent Vehicles Symposium (IV).
[31] J. Zico Kolter,et al. Provable defenses against adversarial examples via the convex outer adversarial polytope , 2017, ICML.
[32] Ananthram Swami,et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks , 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[33] Andrea Bondavalli,et al. On the Safety of Automotive Systems Incorporating Machine Learning Based Components: A Position Paper , 2018, 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W).
[34] Zoubin Ghahramani,et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning , 2015, ICML.
[35] Milos Hauskrecht,et al. Obtaining Well Calibrated Probabilities Using Bayesian Binning , 2015, AAAI.
[36] Dawn Song,et al. Robust Physical-World Attacks on Deep Learning Models , 2017, ArXiv.
[37] R. F. Griffiths,et al. HAZOP and HAZAN: Notes on the Identification and Assessment of Hazards (book by T. A. Kletz, Institution of Chemical Engineers, Rugby, 1983) , 1984.
[38] Simon Burton,et al. Making the Case for Safety of Machine Learning in Highly Automated Driving , 2017, SAFECOMP Workshops.
[39] John Schulman,et al. Concrete Problems in AI Safety , 2016, ArXiv.
[40] Matthias Hein,et al. Why ReLU Networks Yield High-Confidence Predictions Far Away From the Training Data and How to Mitigate the Problem , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[41] J. Zico Kolter,et al. Scaling provable adversarial defenses , 2018, NeurIPS.
[42] David A. Wagner,et al. Towards Evaluating the Robustness of Neural Networks , 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[43] Patrik Feth,et al. Hardening of Artificial Neural Networks for Use in Safety-Critical Applications - A Mapping Study , 2019, ArXiv.
[44] Pan He,et al. Adversarial Examples: Attacks and Defenses for Deep Learning , 2017, IEEE Transactions on Neural Networks and Learning Systems.