Recovering Localized Adversarial Attacks
[1] Daniel P. Huttenlocher, et al. Efficient Graph-Based Image Segmentation, 2004, International Journal of Computer Vision.
[2] Jürgen Schmidhuber, et al. Deep learning in neural networks: An overview, 2014, Neural Networks.
[3] Heiko Wersing, et al. Adversarial attacks hidden in plain sight, 2019, IDA.
[4] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[5] Eric D. Ragan, et al. A Survey of Evaluation Methods and Measures for Interpretable Machine Learning, 2018, ArXiv.
[6] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[7] Matthias Bethge, et al. Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models, 2017, ArXiv.
[8] Barbara Hammer, et al. Using Discriminative Dimensionality Reduction to Visualize Classifiers, 2014, Neural Processing Letters.
[9] Klaus-Robert Müller, et al. Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models, 2017, ArXiv.
[10] Ian J. Goodfellow, et al. Technical Report on the CleverHans v2.1.0 Adversarial Examples Library, 2016.
[11] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[12] Pascal Vincent, et al. Representation Learning: A Review and New Perspectives, 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[13] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Thomas Brox, et al. Striving for Simplicity: The All Convolutional Net, 2014, ICLR.
[15] Heiko Wersing, et al. Optimal local rejection for classifiers, 2016, Neurocomputing.
[16] W. Brendel, et al. Foolbox: A Python toolbox to benchmark the robustness of machine learning models, 2017.