[1] Xiaoyu Cao, et al. Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification, 2017, ACSAC.
[2] Mario Michael Krell, et al. A Capacity Scaling Law for Artificial Neural Networks, 2017, ArXiv.
[3] Bo Li, et al. The Helmholtz Method: Using Perceptual Compression to Reduce Machine Learning Complexity, 2018, ArXiv.
[4] Raúl Rojas, et al. Neural Networks - A Systematic Introduction, 1996.
[5] Xin Li, et al. Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[6] Martin Wattenberg, et al. Adversarial Spheres, 2018, ICLR.
[7] R. Venkatesh Babu, et al. Fast Feature Fool: A data independent approach to universal adversarial perturbations, 2017, BMVC.
[8] Pascal Frossard, et al. Manitest: Are classifiers really invariant?, 2015, BMVC.
[9] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[10] Fabio Roli, et al. Evasion Attacks against Machine Learning at Test Time, 2013, ECML/PKDD.
[11] David A. Wagner, et al. Audio Adversarial Examples: Targeted Attacks on Speech-to-Text, 2018, 2018 IEEE Security and Privacy Workshops (SPW).
[12] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[13] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[14] J. Astola, et al. On Between-Coefficient Contrast Masking of DCT Basis Functions, 2007.
[15] Mario Michael Krell, et al. A Practical Approach to Sizing Neural Networks, 2018, ArXiv.
[16] Dan Boneh, et al. The Space of Transferable Adversarial Examples, 2017, ArXiv.
[17] Cewu Lu, et al. LiDAR-Video Driving Dataset: Learning Driving Policies Effectively, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[18] Atul Prakash, et al. Robust Physical-World Attacks on Machine Learning Models, 2017, ArXiv.
[19] David J. C. MacKay, et al. Information Theory, Inference, and Learning Algorithms, 2003, Cambridge University Press.
[20] D. P. Phillips. Neural representation of sound amplitude in the auditory cortex: effects of noise masking, 1990, Behavioural Brain Research.
[21] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[22] Dawn Song, et al. Robust Physical-World Attacks on Deep Learning Models, 2017, ArXiv.
[23] Can Kanbak. Measuring Robustness of Classifiers to Geometric Transformations, 2017.
[24] Dawn Xiaodong Song, et al. Delving into Transferable Adversarial Examples and Black-box Attacks, 2016, ICLR.
[25] Etienne Perot, et al. Deep Reinforcement Learning framework for Autonomous Driving, 2017, Autonomous Vehicles and Machines.
[26] Sandy H. Huang, et al. Adversarial Attacks on Neural Network Policies, 2017, ICLR.
[27] Dejing Dou, et al. HotFlip: White-Box Adversarial Examples for NLP, 2017, ArXiv.
[28] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[29] Dawn Xiaodong Song, et al. Decision Boundary Analysis of Adversarial Examples, 2018, ICLR.
[30] Beilun Wang, et al. A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Examples, 2016, ICLR.
[31] James Bailey, et al. Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality, 2018, ICLR.
[32] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[33] Dawn Xiaodong Song, et al. Delving into adversarial attacks on deep policies, 2017, ICLR.
[34] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[35] Jürgen Schmidhuber, et al. Multi-column deep neural networks for image classification, 2012, 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[36] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[37] Ryan R. Curtin, et al. Detecting Adversarial Samples from Artifacts, 2017, ArXiv.
[38] Tao Liu, et al. Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[39] Shin Ishii, et al. Distributional Smoothing with Virtual Adversarial Training, 2015, ICLR.
[40] Patrick D. McDaniel, et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, 2016, ArXiv.
[41] Tara N. Sainath, et al. Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups, 2012, IEEE Signal Processing Magazine.
[42] Tsachy Weissman, et al. Order-Optimal Estimation of Functionals of Discrete Distributions, 2014, ArXiv.
[43] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).