An Investigation of Bounded Misclassification for Operational Security of Deep Neural Networks