Adversarial Training is Not Ready for Robot Learning
Thomas A. Henzinger | Radu Grosu | Daniela Rus | Mathias Lechner | Ramin Hasani
[1] Radu Grosu,et al. Neural circuit policies enabling auditable autonomy , 2020, Nature Machine Intelligence.
[2] Michael I. Jordan,et al. Theoretically Principled Trade-off between Robustness and Accuracy , 2019, ICML.
[3] Timothy A. Mann,et al. On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models , 2018, ArXiv.
[4] Greg Yang,et al. Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers , 2019, NeurIPS.
[5] Patric Jensfelt,et al. Adversarial Feature Training for Generalizable Robotic Visuomotor Control , 2019, 2020 IEEE International Conference on Robotics and Automation (ICRA).
[7] Rao R. Bhavani,et al. Follow me robot using Bluetooth-based position estimation , 2017, 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI).
[8] Bernt Schiele,et al. Disentangling Adversarial Robustness and Generalization , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[9] Paul Newman,et al. Adversarial Training for Adverse Conditions: Robust Metric Localisation Using Appearance Transfer , 2018, 2018 IEEE International Conference on Robotics and Automation (ICRA).
[10] Fabio Roli,et al. Evasion Attacks against Machine Learning at Test Time , 2013, ECML/PKDD.
[11] J. Zico Kolter,et al. Provable defenses against adversarial examples via the convex outer adversarial polytope , 2017, ICML.
[12] Mathias Lechner,et al. Learning Long-Term Dependencies in Irregularly-Sampled Time Series , 2020, NeurIPS.
[13] Ilya P. Razenshteyn,et al. Adversarial examples from computational constraints , 2018, ICML.
[14] Radu Grosu,et al. Gershgorin Loss Stabilizes the Recurrent Neural Network Compartment of an End-to-end Robot Learning Scheme , 2020, 2020 IEEE International Conference on Robotics and Automation (ICRA).
[15] Yoshua Bengio,et al. Generative Adversarial Nets , 2014, NIPS.
[16] Gunnar Rätsch,et al. Advanced Lectures on Machine Learning: ML Summer Schools 2003, Canberra, Australia, February 2-14, 2003, and Tübingen, Germany, August 4-16, 2003, Revised Lectures , 2004.
[17] Aleksander Madry,et al. Image Synthesis with a Single (Robust) Classifier , 2019, NeurIPS.
[18] Inderjit S. Dhillon,et al. Towards Fast Computation of Certified Robustness for ReLU Networks , 2018, ICML.
[19] Michael W. Mahoney,et al. Adversarially-Trained Deep Nets Transfer Better , 2020, ArXiv.
[20] Richard Bowden,et al. Training Adversarial Agents to Exploit Weaknesses in Deep Control Policies , 2020, 2020 IEEE International Conference on Robotics and Automation (ICRA).
[21] J. Miura,et al. Robust Stereo-Based Person Detection and Tracking for a Person Following Robot , 2009 .
[22] Po-Sen Huang,et al. Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation , 2019, EMNLP/IJCNLP.
[23] David Wagner,et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods , 2017, AISec@CCS.
[24] Jinfeng Yi,et al. Is Robustness the Cost of Accuracy? - A Comprehensive Study on the Robustness of 18 Deep Image Classification Models , 2018, ECCV.
[25] Shin Ishii,et al. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning , 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[26] Yang Gao,et al. Risk Averse Robust Adversarial Reinforcement Learning , 2019, 2019 International Conference on Robotics and Automation (ICRA).
[28] Aditi Raghunathan,et al. Certified Defenses against Adversarial Examples , 2018, ICLR.
[29] Radu Grosu,et al. A Machine Learning Suite for Machine Components' Health-Monitoring , 2019, AAAI.
[30] David A. Wagner,et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples , 2018, ICML.
[31] Guoquan Huang,et al. Map-Based Localization Under Adversarial Attacks , 2019, ISRR.
[32] Radu Grosu,et al. Designing Worm-inspired Neural Networks for Interpretable Robotic Control , 2019, 2019 International Conference on Robotics and Automation (ICRA).
[33] Radu Grosu,et al. Liquid Time-constant Networks , 2020, AAAI.
[34] Aleksander Madry,et al. Adversarial Examples Are Not Bugs, They Are Features , 2019, NeurIPS.
[35] Tom Goldstein,et al. Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets , 2019, ArXiv.
[36] Aleksander Madry,et al. Learning Perceptually-Aligned Representations via Adversarial Robustness , 2019, ArXiv.
[37] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.
[38] Pratap Tokekar,et al. Robust Multiple-Path Orienteering Problem: Securing Against Adversarial Attacks , 2020, Robotics: Science and Systems.
[39] Salman Afghani,et al. Follow Me Robot Using Infrared Beacons , 2013.
[40] Simran Kaur,et al. Are Perceptually-Aligned Gradients a General Property of Robust Classifiers? , 2019, ArXiv.
[41] Matteo Munaro,et al. Tracking people within groups with RGB-D data , 2012, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[42] Bernhard Schölkopf. Causality for Machine Learning , 2019.
[43] Hiroshi Mizoguchi,et al. Development of a Person Following Robot with Vision Based Target Detection , 2006, 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[44] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[45] David Jacobs,et al. Adversarially robust transfer learning , 2020, ICLR.
[46] Ilya P. Razenshteyn,et al. Randomized Smoothing of All Shapes and Sizes , 2020, ICML.
[47] Michael S. Bernstein,et al. ImageNet Large Scale Visual Recognition Challenge , 2014, International Journal of Computer Vision.
[48] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[49] Suman Jana,et al. Certified Robustness to Adversarial Examples with Differential Privacy , 2018, 2019 IEEE Symposium on Security and Privacy (SP).
[50] Ashish Kapoor,et al. Do Adversarially Robust ImageNet Models Transfer Better? , 2020, NeurIPS.
[51] Jonathan P. How,et al. Active Perception in Adversarial Scenarios using Maximum Entropy Deep Reinforcement Learning , 2019, 2019 International Conference on Robotics and Automation (ICRA).
[52] Masayoshi Tomizuka,et al. Interaction-aware Multi-agent Tracking and Probabilistic Behavior Prediction via Adversarial Learning , 2019, 2019 International Conference on Robotics and Automation (ICRA).
[53] Beomsu Kim,et al. Bridging Adversarial Robustness and Gradient Interpretability , 2019, ArXiv.
[54] Marco Pavone,et al. Safe Motion Planning in Unknown Environments: Optimality Benchmarks and Tractable Policies , 2018, Robotics: Science and Systems.
[55] Yuanzhi Li,et al. Feature Purification: How Adversarial Training Performs Robust Deep Learning , 2020, 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS).
[56] Radu Grosu,et al. Model-based versus Model-free Deep Reinforcement Learning for Autonomous Racing Cars , 2021, ArXiv.
[57] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[58] Pushmeet Kohli,et al. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks , 2018, ICML.
[59] Aleksander Madry,et al. Robustness May Be at Odds with Accuracy , 2018, ICLR.
[61] Aditi Raghunathan,et al. Adversarial Training Can Hurt Generalization , 2019, ArXiv.
[62] Misha Denil,et al. Task-Relevant Adversarial Imitation Learning , 2019, CoRL.
[63] Radu Grosu,et al. On The Verification of Neural ODEs with Stochastic Guarantees , 2020, AAAI.
[64] Alan L. Yuille,et al. Feature Denoising for Improving Adversarial Robustness , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[65] J. Zico Kolter,et al. Certified Adversarial Robustness via Randomized Smoothing , 2019, ICML.
[66] Samy Bengio,et al. Adversarial Machine Learning at Scale , 2016, ICLR.