Iulian Ober | Ileana Ober | Christophe Gabreau | Guillaume Vidot
[1] Aleksander Madry, et al. On Evaluating Adversarial Robustness, 2019, ArXiv.
[2] Rob Fergus, et al. Visualizing and Understanding Convolutional Networks, 2013, ECCV.
[3] Been Kim, et al. Sanity Checks for Saliency Maps, 2018, NeurIPS.
[4] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.
[5] Jinfeng Yi, et al. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models, 2017, AISec@CCS.
[6] Matthew Mirman, et al. Differentiable Abstract Interpretation for Provably Robust Neural Networks, 2018, ICML.
[7] Xin Zhang, et al. TFX: A TensorFlow-Based Production-Scale Machine Learning Platform, 2017, KDD.
[8] Bin Dong, et al. You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle, 2019, NeurIPS.
[9] Fabio Roli, et al. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning, 2018, CCS.
[10] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[11] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[12] Leslie G. Valiant, et al. A Theory of the Learnable, 1984, STOC '84.
[13] Masashi Sugiyama, et al. Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks, 2018, NeurIPS.
[14] Jim Krodel. Technology Changes In Aeronautical Systems, 2008.
[15] Ali Farhadi, et al. You Only Look Once: Unified, Real-Time Object Detection, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[16] Mykel J. Kochenderfer, et al. The Marabou Framework for Verification and Analysis of Deep Neural Networks, 2019, CAV.
[17] Junfeng Yang, et al. Formal Security Analysis of Neural Networks using Symbolic Intervals, 2018, USENIX Security Symposium.
[18] Hao Chen, et al. Practical No-box Adversarial Attacks against DNNs, 2020, NeurIPS.
[19] John Rushby, et al. The Interpretation and Evaluation of Assurance Cases, 2015.
[20] Cho-Jui Hsieh, et al. A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks, 2019, NeurIPS.
[21] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[22] Trevor Hastie, et al. Causal Interpretations of Black-Box Models, 2019, Journal of Business & Economic Statistics.
[23] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[24] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[25] Moustapha Cissé, et al. Parseval Networks: Improving Robustness to Adversarial Examples, 2017, ICML.
[26] Trevor Darrell, et al. Generating Visual Explanations, 2016, ECCV.
[27] Franco Turini, et al. Local Rule-Based Explanations of Black Box Decision Systems, 2018, ArXiv.
[28] Sylvaine Picard, et al. Ensuring Dataset Quality for Machine Learning Certification, 2020, 2020 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW).
[29] Andrea Vedaldi, et al. Interpretable Explanations of Black Boxes by Meaningful Perturbation, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[30] Nic Ford, et al. Adversarial Examples Are a Natural Consequence of Test Error in Noise, 2019, ICML.
[31] Mark A. Przybocki, et al. Four Principles of Explainable Artificial Intelligence, 2020.
[32] Timon Gehr, et al. Boosting Robustness Certification of Neural Networks, 2018, ICLR.
[33] Samy Bengio, et al. Adversarial Examples in the Physical World, 2016, ICLR.
[34] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[35] Valentina Zantedeschi, et al. Efficient Defenses Against Adversarial Attacks, 2017, AISec@CCS.
[36] Alex Kendall, et al. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, 2017, NIPS.
[37] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[38] Radu Calinescu, et al. Assuring the Machine Learning Lifecycle, 2019, ACM Comput. Surv.
[39] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[40] David A. McAllester. Some PAC-Bayesian Theorems, 1998, COLT '98.
[41] A. Hanuschkin, et al. Towards CRISP-ML(Q): A Machine Learning Process Model with Quality Assurance Methodology, 2020, Mach. Learn. Knowl. Extr.
[42] Qian Huang, et al. Enhancing Adversarial Example Transferability With an Intermediate Level Attack, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[43] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[44] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.
[45] Daniel Kroening, et al. A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability, 2018, Comput. Sci. Rev.
[46] Yoshua Bengio, et al. Gradient-Based Learning Applied to Document Recognition, 1998, Proc. IEEE.
[47] Swarat Chaudhuri, et al. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation, 2018, 2018 IEEE Symposium on Security and Privacy (SP).
[48] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[49] Cho-Jui Hsieh, et al. Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond, 2020, NeurIPS.
[50] Joan Bruna, et al. Intriguing Properties of Neural Networks, 2013, ICLR.
[51] Varun Kanade, et al. On the Hardness of Robust Classification, 2019, Electron. Colloquium Comput. Complex.
[52] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[53] Yarin Gal, et al. Real Time Image Saliency for Black Box Classifiers, 2017, NIPS.
[54] Shiliang Sun, et al. PAC-Bayes Bounds with Data Dependent Priors, 2012, J. Mach. Learn. Res.
[55] Ulrike von Luxburg, et al. Explaining the Explainer: A First Theoretical Analysis of LIME, 2020, AISTATS.
[56] Mykel J. Kochenderfer, et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks, 2017, CAV.
[57] Avanti Shrikumar, et al. Learning Important Features Through Propagating Activation Differences, 2017, ICML.
[58] Min Wu, et al. Safety Verification of Deep Neural Networks, 2016, CAV.
[59] Yangyong Zhu, et al. The Challenges of Data Quality and Data Quality Assessment in the Big Data Era, 2015, Data Sci. J.
[60] Kui Ren, et al. Adversarial Attacks and Defenses in Deep Learning, 2020, Engineering.
[61] John Shawe-Taylor, et al. A PAC Analysis of a Bayesian Estimator, 1997, COLT '97.
[62] Harald C. Gall, et al. Software Engineering for Machine Learning: A Case Study, 2019, 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP).
[63] Patrick D. McDaniel, et al. Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning, 2018, ArXiv.