[1] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[2] Li Chen, et al. Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression, 2017, ArXiv.
[3] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[4] Xiao Wang, et al. Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses, 2019, IJCAI.
[5] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[6] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[7] Guneet Singh Dhillon, et al. Stochastic Activation Pruning for Robust Adversarial Defense, 2018.
[8] Xiao Wang, et al. Defensive dropout for hardening deep neural networks under adversarial attacks, 2018, ICCAD.
[9] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[10] Yanzhi Wang, et al. Fault Sneaking Attack: a Stealthy Framework for Misleading Deep Neural Networks, 2019.
[11] Xin Zhang, et al. End to End Learning for Self-Driving Cars, 2016, ArXiv.
[12] Yanzhi Wang, et al. An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks, 2018, ACM Multimedia.
[13] Ajmal Mian, et al. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey, 2018, IEEE Access.
[14] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[15] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[16] Song Han, et al. EIE: Efficient Inference Engine on Compressed Deep Neural Network, 2016, 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA).
[17] Xiao Wang, et al. Defending DNN Adversarial Attacks with Pruning and Logits Augmentation, 2018, 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP).
[18] Kamyar Azizzadenesheli, et al. Stochastic Activation Pruning for Robust Adversarial Defense, 2018, ICLR.
[19] Jan Hendrik Metzen, et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[20] Patrick D. McDaniel, et al. On the (Statistical) Detection of Adversarial Examples, 2017, ArXiv.
[21] Ralph Etienne-Cummings, et al. Using Deep Learning to Extract Scenery Information in Real Time Spatiotemporal Compressed Sensing, 2018, 2018 IEEE International Symposium on Circuits and Systems (ISCAS).
[22] Yuan Yu, et al. TensorFlow: A system for large-scale machine learning, 2016, OSDI.
[23] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[24] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[25] Andrew Zisserman, et al. Deep Face Recognition, 2015, BMVC.
[26] Wenyao Xu, et al. E-RNN: Design Optimization for Efficient Recurrent Neural Networks in FPGAs, 2018, 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA).
[27] Trevor Darrell, et al. Caffe: Convolutional Architecture for Fast Feature Embedding, 2014, ACM Multimedia.