Yao Zhao | Takuya Akiba | Motoki Abe | Samy Bengio | Jun Zhu | Xiaolin Hu | Alan L. Yuille | Yinpeng Dong | Zhishuai Zhang | Jianyu Wang | Zhou Ren | Ming Liang | Ian J. Goodfellow | Alexey Kurakin | Cihang Xie | Tianyu Pang | Yuzhe Zhao | Junjiajia Long | Sangxia Huang | Zhonglin Han | Seiya Tokui | Fangzhou Liao | Yerkebulan Berdibekov
[1] Lujo Bauer, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, 2016, CCS.
[2] Jan Hendrik Metzen, et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[3] Fabio Roli, et al. Evasion Attacks against Machine Learning at Test Time, 2013, ECML/PKDD.
[4] Yoshua Bengio, et al. Extracting and composing robust features with denoising autoencoders, 2008, ICML '08.
[5] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[6] Jinfeng Yi, et al. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models, 2017, AISec@CCS.
[7] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[8] Colin Raffel, et al. Thermometer Encoding: One Hot Way To Resist Adversarial Examples, 2018, ICLR.
[9] Jerzy Korczak, et al. Optimization and global minimization methods suitable for neural networks, 1999.
[10] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[11] Boris Polyak. Some methods of speeding up the convergence of iteration methods, 1964.
[12] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[13] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[14] François Chollet. Xception: Deep Learning with Depthwise Separable Convolutions, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[15] Lei Zhang, et al. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising, 2016, IEEE Transactions on Image Processing.
[16] Jian Sun, et al. Identity Mappings in Deep Residual Networks, 2016, ECCV.
[17] Yanjun Qi, et al. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks, 2017, NDSS.
[18] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[19] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[20] Alan L. Yuille, et al. Mitigating adversarial effects through randomization, 2017, ICLR.
[21] Jun Zhu, et al. Boosting Adversarial Attacks with Momentum, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[22] Dale Schuurmans, et al. Learning with a Strong Adversary, 2015, arXiv.
[23] Ian S. Fischer, et al. Adversarial Transformation Networks: Learning to Generate Adversarial Examples, 2017, arXiv.
[24] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Dawn Xiaodong Song, et al. Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong, 2017, arXiv.
[26] Martin Wattenberg, et al. Adversarial Spheres, 2018, ICLR.
[27] Dawn Xiaodong Song, et al. Delving into Transferable Adversarial Examples and Black-box Attacks, 2016, ICLR.
[28] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[29] Matthias Bethge, et al. Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models, 2017, ICLR.
[30] Fei-Fei Li, et al. ImageNet: A large-scale hierarchical image database, 2009, 2009 IEEE Conference on Computer Vision and Pattern Recognition.
[31] Sergey Ioffe, et al. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, 2016, AAAI.
[32] Patrick D. McDaniel, et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, 2016, arXiv.
[33] Geoffrey E. Hinton, et al. On the importance of initialization and momentum in deep learning, 2013, ICML.
[34] Li Chen, et al. Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression, 2017, arXiv.
[35] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[36] Geoffrey E. Hinton, et al. Matrix capsules with EM routing, 2018, ICLR.
[37] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[38] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[39] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[40] Xiaolin Hu, et al. Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.