Wenbo Guo | Dawn Song | Min Du | Lun Wang | Xinyu Xing
[1] Fei-Fei Li, et al. ImageNet: A large-scale hierarchical image database, 2009, 2009 IEEE Conference on Computer Vision and Pattern Recognition.
[2] Johannes Stallkamp, et al. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition, 2012, Neural Networks.
[3] Christophe Ley, et al. Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median, 2013, Journal of Experimental Social Psychology.
[4] Xiaogang Wang, et al. Deep Learning Face Representation by Joint Identification-Verification, 2014, NIPS.
[5] Quoc V. Le, et al. Sequence to Sequence Learning with Neural Networks, 2014, NIPS.
[6] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[7] Geoffrey E. Hinton, et al. Deep Learning, 2015, Nature.
[8] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[9] Xiaogang Wang, et al. DeepID3: Face Recognition with Very Deep Neural Networks, 2015, ArXiv.
[10] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[11] Jianxiong Xiao, et al. DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[12] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[13] Kenji Kawaguchi, et al. Deep Learning without Poor Local Minima, 2016, NIPS.
[14] Xin Zhang, et al. End to End Learning for Self-Driving Cars, 2016, ArXiv.
[15] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[16] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[17] Gang Hua, et al. Labeled Faces in the Wild: A Survey, 2016.
[18] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[19] Fan Zhang, et al. Stealing Machine Learning Models via Prediction APIs, 2016, USENIX Security Symposium.
[20] Brendan Dolan-Gavitt, et al. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 2017, ArXiv.
[21] Chang Liu, et al. Robust Linear Regression Against Training Data Poisoning, 2017, AISec@CCS.
[22] Andrea Vedaldi, et al. Interpretable Explanations of Black Boxes by Meaningful Perturbation, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[23] Ankur Srivastava, et al. Neural Trojans, 2017, 2017 IEEE International Conference on Computer Design (ICCD).
[24] Yarin Gal, et al. Real Time Image Saliency for Black Box Classifiers, 2017, NIPS.
[25] Gregory Valiant, et al. Learning from untrusted data, 2016, STOC.
[26] Dawn Xiaodong Song, et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 2017, ArXiv.
[27] Dawn Song, et al. Robust Physical-World Attacks on Deep Learning Models, 2017, ArXiv (1707.08945).
[28] Percy Liang, et al. Certified Defenses for Data Poisoning Attacks, 2017, NIPS.
[29] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[30] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[31] Wen-Chuan Lee, et al. Trojaning Attack on Neural Networks, 2018, NDSS.
[32] Tommi S. Jaakkola, et al. Towards Robust Interpretability with Self-Explaining Neural Networks, 2018, NeurIPS.
[33] Wenbo Guo, et al. Explaining Deep Learning Models - A Bayesian Non-parametric Approach, 2018, NeurIPS.
[34] Jerry Li, et al. Spectral Signatures in Backdoor Attacks, 2018, NeurIPS.
[35] Brendan Dolan-Gavitt, et al. Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks, 2018, RAID.
[36] Benny Pinkas, et al. Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring, 2018, USENIX Security Symposium.
[37] Gang Wang, et al. LEMNA: Explaining Deep Learning based Security Applications, 2018, CCS.
[38] Dan Boneh, et al. SentiNet: Detecting Physical Attacks Against Deep Learning Systems, 2018, ArXiv.
[39] Ting Wang, et al. TextBugger: Generating Adversarial Text Against Real-world Applications, 2018, NDSS.
[40] Ben Y. Zhao, et al. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks, 2019, 2019 IEEE Symposium on Security and Privacy (SP).
[41] Benjamin Edwards, et al. Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering, 2018, SafeAI@AAAI.
[42] Damith Chinthana Ranasinghe, et al. STRIP: A Defence Against Trojan Attacks on Deep Neural Networks, 2019, ACSAC.
[43] Wen-Chuan Lee, et al. NIC: Detecting Adversarial Samples with Neural Network Invariant Checking, 2019, NDSS.
[44] Shi Feng, et al. Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation, 2019, ICML.
[45] Junfeng Yang, et al. DeepXplore: Automated Whitebox Testing of Deep Learning Systems, 2019, Commun. ACM.
[46] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[47] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[48] Florian Tramèr, et al. SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems, 2018, 2020 IEEE Security and Privacy Workshops (SPW).