WIP: End-to-End Analysis of Adversarial Attacks to Automated Lane Centering Systems
[1] Maximilian Baust, et al. Learning in an Uncertain World: Representing Ambiguity Through Multiple Hypotheses, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[2] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Shichao Xu, et al. Safety-Assured Design and Adaptation of Learning-Enabled Autonomous Systems, 2021, 2021 26th Asia and South Pacific Design Automation Conference (ASP-DAC).
[4] Jiameng Fan, et al. Know the Unknowns: Addressing Disturbances and Uncertainties in Autonomous Systems: Invited Paper, 2020, 2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD).
[5] Q. Lu, et al. LGSVL Simulator: A High Fidelity Simulator for Autonomous Driving, 2020, 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC).
[6] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[7] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[8] Ningfei Wang, et al. Hold Tight and Never Let Go: Security of Deep Learning based Automated Lane Centering under Physical-World Attack, 2020, ArXiv.
[9] Wei Li, et al. DeepBillboard: Systematic Physical-World Testing of Autonomous Driving Systems, 2018, 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE).
[10] John McDonald, et al. Application of the Hough Transform to Lane Detection and Following on High Speed Roads, 2001.
[11] Zhihao Zheng, et al. Robust Detection of Adversarial Attacks by Modeling the Intrinsic Properties of Deep Neural Networks, 2018, NeurIPS.
[12] Daniel Ramos, et al. Deconstructing Cross-Entropy for Probabilistic Binary Classifiers, 2018, Entropy.
[13] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[14] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[15] Kibok Lee, et al. A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks, 2018, NeurIPS.
[16] Atul Prakash, et al. Robust Physical-World Attacks on Deep Learning Visual Classification, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).