Trevor Darrell | Evan Shelhamer | David A. Wagner | Dequan Wang | An Ju
[1] Matthias Bethge, et al. Improving robustness against common corruptions by covariate shift adaptation, 2020, NeurIPS.
[2] Nic Ford, et al. Adversarial Examples Are a Natural Consequence of Test Error in Noise, 2019, ICML.
[3] Geoffrey E. Hinton, et al. Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer, 2017, ICLR.
[4] Xin Wang, et al. SkipNet: Learning Dynamic Routing in Convolutional Networks, 2017, ECCV.
[5] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy, 2019, ICML.
[6] Serge J. Belongie, et al. Convolutional Networks with Adaptive Inference Graphs, 2017, International Journal of Computer Vision.
[7] Alex Graves, et al. Adaptive Computation Time for Recurrent Neural Networks, 2016, ArXiv.
[8] J. Zico Kolter, et al. Certified Adversarial Robustness via Randomized Smoothing, 2019, ICML.
[9] Pin-Yu Chen, et al. Attacking the Madry Defense Model with L1-based Adversarial Examples, 2017, ICLR.
[10] Yang Song, et al. PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples, 2017, ICLR.
[11] Ian J. Goodfellow, et al. A Research Agenda: Dynamic Models to Defend Against Correlated Attacks, 2019, ArXiv.
[12] J. Zico Kolter, et al. Overfitting in adversarially robust deep learning, 2020, ICML.
[13] Bernt Schiele, et al. Disentangling Adversarial Robustness and Generalization, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[15] Vladlen Koltun, et al. Deep Equilibrium Models, 2019, NeurIPS.
[16] Quoc V. Le, et al. CondConv: Conditionally Parameterized Convolutions for Efficient Inference, 2019, NeurIPS.
[17] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[18] Alexei A. Efros, et al. Test-Time Training for Out-of-Distribution Generalization, 2019, ArXiv.
[19] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[20] J. Zico Kolter, et al. Fast is better than free: Revisiting adversarial training, 2020, ICLR.
[21] Trevor Darrell, et al. Blurring the Line Between Structure and Learning to Optimize and Adapt Receptive Fields, 2019, ArXiv.
[22] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[23] Cho-Jui Hsieh, et al. Towards Stable and Efficient Training of Verifiably Robust Neural Networks, 2019, ICLR.
[24] Jiashi Feng, et al. Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation, 2020, ICML.
[25] Matthias Hein, et al. Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack, 2019, ICML.
[26] Pan He, et al. Adversarial Examples: Attacks and Defenses for Deep Learning, 2017, IEEE Transactions on Neural Networks and Learning Systems.
[27] Trevor Darrell, et al. Fully Test-time Adaptation by Entropy Minimization, 2020, ArXiv.
[28] Moustapha Cissé, et al. Countering Adversarial Images using Input Transformations, 2018, ICLR.
[29] David Duvenaud, et al. Neural Ordinary Differential Equations, 2018, NeurIPS.
[30] Taesung Park, et al. CyCADA: Cycle-Consistent Adversarial Domain Adaptation, 2017, ICML.
[31] Ludwig Schmidt, et al. Unlabeled Data Improves Adversarial Robustness, 2019, NeurIPS.
[32] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[33] Ran He, et al. Source Data-Absent Unsupervised Domain Adaptation Through Hypothesis Transfer and Labeling Transfer, 2020, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[34] W. Brendel, et al. Foolbox: A Python toolbox to benchmark the robustness of machine learning models, 2017.
[35] Stella X. Yu, et al. Open Compound Domain Adaptation, 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[36] Kamyar Azizzadenesheli, et al. Stochastic Activation Pruning for Robust Adversarial Defense, 2018, ICLR.
[37] Xuancheng Ren, et al. An Adaptive and Momental Bound Method for Stochastic Learning, 2019, ArXiv.
[38] Luiz Eduardo Soares de Oliveira, et al. Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[39] Thomas G. Dietterich, et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, 2018, ICLR.
[40] Aaron C. Courville, et al. FiLM: Visual Reasoning with a General Conditioning Layer, 2017, AAAI.
[41] Yisen Wang, et al. Adversarial Weight Perturbation Helps Robust Generalization, 2020, NeurIPS.
[42] Trevor Darrell, et al. Adapting Visual Category Models to New Domains, 2010, ECCV.
[43] Edward Raff, et al. Barrage of Random Transforms for Adversarially Robust Defense, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[44] Ruitong Huang, et al. Max-Margin Adversarial (MMA) Training: Direct Input Space Margin Maximization through Adversarial Training, 2018, ICLR.
[45] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[46] Matthias Hein, et al. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks, 2020, ICML.
[47] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[48] C. E. Shannon. A Mathematical Theory of Communication, 1948, Bell System Technical Journal.
[49] Gal Mishne, et al. Online Adversarial Purification based on Self-supervised Learning, 2021, ICLR.
[50] Dan Boneh, et al. Adversarial Training and Robustness for Multiple Perturbations, 2019, NeurIPS.
[51] Song-Chun Zhu, et al. Stochastic Security: Adversarial Defense Using Long-Run Dynamics of Energy-Based Models, 2020, ICLR.
[52] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[53] Neil D. Lawrence, et al. Dataset Shift in Machine Learning, 2009.
[54] Kun Xu, et al. Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks, 2020, ICLR.
[55] Rama Chellappa, et al. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models, 2018, ICLR.
[56] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[57] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[58] Saining Xie, et al. On Network Design Spaces for Visual Recognition, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[59] Olga Russakovsky, et al. ImageNet Large Scale Visual Recognition Challenge, 2015.
[60] Nicolas Flammarion, et al. Square Attack: a query-efficient black-box adversarial attack via random search, 2020, ECCV.
[61] Prateek Mittal, et al. RobustBench: a standardized adversarial robustness benchmark, 2020, ArXiv.
[62] Aleksander Madry, et al. On Adaptive Attacks to Adversarial Example Defenses, 2020, NeurIPS.
[63] Jinfeng Yi, et al. Is Robustness the Cost of Accuracy? - A Comprehensive Study on the Robustness of 18 Deep Image Classification Models, 2018, ECCV.
[64] Kaiming He, et al. Designing Network Design Spaces, 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[65] Anh Nguyen-Tuong, et al. Effectiveness of Moving Target Defenses, 2011, Moving Target Defense.