Alejandro Ribeiro | George J. Pappas | Hamed Hassani | Luiz F. O. Chamon | Alexander Robey
[1] George J. Pappas, et al. Model-Based Domain Generalization, 2021, NeurIPS.
[2] James Bailey, et al. Improving Adversarial Robustness Requires Revisiting Misclassified Examples, 2020, ICLR.
[3] Shiqi Wang, et al. Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization, 2020, NeurIPS.
[4] J. Frédéric Bonnans, et al. Convex and Stochastic Optimization, 2019, Universitext.
[5] Xiaoqing Han, et al. Review on the research and practice of deep learning and reinforcement learning in smart grids, 2018, CSEE Journal of Power and Energy Systems.
[6] Graham W. Taylor, et al. Improved Regularization of Convolutional Neural Networks with Cutout, 2017, ArXiv.
[7] Yoshua Bengio, et al. A Closer Look at Memorization in Deep Networks, 2017, ICML.
[8] Michael Carl Tschantz, et al. Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination, 2014, ArXiv.
[9] J. Zico Kolter, et al. Learning perturbation sets for robust machine learning, 2020, ICLR.
[10] Gaurav S. Sukhatme, et al. Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning, 2020.
[11] Matus Telgarsky, et al. Spectrally-normalized margin bounds for neural networks, 2017, NIPS.
[12] Benjamin Pfaff, et al. Perturbation Analysis of Optimization Problems, 2016.
[13] Dacheng Tao, et al. Theoretical Analysis of Adversarial Learning: A Minimax Approach, 2018, NeurIPS.
[14] Trevor Darrell, et al. Constrained Convolutional Neural Networks for Weakly Supervised Segmentation, 2015, IEEE International Conference on Computer Vision (ICCV).
[15] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[16] Sébastien Bubeck, et al. Convex Optimization: Algorithms and Complexity, 2014, Found. Trends Mach. Learn.
[17] J. Zico Kolter, et al. OptNet: Differentiable Optimization as a Layer in Neural Networks, 2017, ICML.
[18] Vijay Kumar, et al. Approximating Explicit Model Predictive Control Using Constrained Neural Networks, 2018, Annual American Control Conference (ACC).
[19] Christos Davatzikos, et al. Medical Image Harmonization Using Deep Learning Based Canonical Mapping: Toward Robust and Generalizable Learning in Imaging, 2020, ArXiv.
[20] Sebastian U. Stich, et al. Analysis of SGD with Biased Gradient Estimators, 2020, ArXiv.
[21] Wolfram Burgard, et al. The limits and potentials of deep learning for robotics, 2018, Int. J. Robotics Res.
[22] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[23] Dimitris A. Karras, et al. An efficient constrained training algorithm for feedforward networks, 1995, IEEE Trans. Neural Networks.
[24] Vikas Singh, et al. Constrained Deep Learning using Conditional Gradient and Applications in Computer Vision, 2018, ArXiv.
[25] Ludwig Schmidt, et al. Unlabeled Data Improves Adversarial Robustness, 2019, NeurIPS.
[26] Quoc V. Le, et al. Measuring Invariances in Deep Networks, 2009, NIPS.
[27] Prateek Mittal, et al. RobustBench: a standardized adversarial robustness benchmark, 2020, ArXiv.
[28] Prateek Mittal, et al. PAC-learning in the presence of evasion adversaries, 2018, NIPS.
[29] Uri Shaham, et al. Understanding adversarial training: Increasing local stability of supervised models through robust optimization, 2015, Neurocomputing.
[30] Jan Peters, et al. Reinforcement learning in robotics: A survey, 2013, Int. J. Robotics Res.
[31] D. Song, et al. The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization, 2020, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[32] Sébastien Bubeck, et al. Finite-Time Analysis of Projected Langevin Monte Carlo, 2015, NIPS.
[33] Shai Ben-David, et al. Understanding Machine Learning: From Theory to Algorithms, 2014.
[34] Alejandro Ribeiro, et al. Constrained Reinforcement Learning Has Zero Duality Gap, 2019, NeurIPS.
[35] Alejandro Ribeiro, et al. Probably Approximately Correct Constrained Learning, 2020, NeurIPS.
[36] Fabio Roli, et al. Evasion Attacks against Machine Learning at Test Time, 2013, ECML/PKDD.
[37] Petri Koistinen, et al. Using additive noise in back-propagation training, 1992, IEEE Trans. Neural Networks.
[38] Pieter Abbeel, et al. Robust Reinforcement Learning using Adversarial Populations, 2020, ArXiv.
[39] Pin-Yu Chen, et al. CAT: Customized Adversarial Training for Improved Robustness, 2020, IJCAI.
[40] Ekin D. Cubuk, et al. Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation, 2019, ArXiv.
[41] Xi Chen, et al. Wasserstein Distributional Robustness and Regularization in Statistical Learning, 2017, ArXiv.
[42] Benjamin Recht, et al. Measuring Robustness to Natural Distribution Shifts in Image Classification, 2020, NeurIPS.
[43] Anirudha Majumdar, et al. Invariant Policy Optimization: Towards Stronger Generalization in Reinforcement Learning, 2020, ArXiv.
[44] Greg Yang, et al. Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers, 2019, NeurIPS.
[45] Takashi Matsubara, et al. Data Augmentation Using Random Image Cropping and Patching for Deep CNNs, 2018, IEEE Transactions on Circuits and Systems for Video Technology.
[46] Andre Esteva, et al. A guide to deep learning in healthcare, 2019, Nature Medicine.
[47] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy, 2019, ICML.
[48] Yisen Wang, et al. Adversarial Weight Perturbation Helps Robust Generalization, 2020, NeurIPS.
[49] John C. Duchi, et al. Certifying Some Distributional Robustness with Principled Adversarial Training, 2017, ICLR.
[50] Aleksander Madry, et al. Robustness May Be at Odds with Accuracy, 2018, ICLR.
[51] Michael Betancourt, et al. A Conceptual Introduction to Hamiltonian Monte Carlo, 2017, arXiv:1701.02434.
[52] Harini Kannan, et al. Adversarial Logit Pairing, 2018, NIPS.
[53] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[54] David Rolnick, et al. DC3: A learning method for optimization with hard constraints, 2021, ICLR.
[55] Edgar Dobriban, et al. A Group-Theoretic Framework for Data Augmentation, 2019, NeurIPS.
[56] Alexander Cloninger, et al. Defending against Adversarial Images using Basis Functions Transformations, 2018, ArXiv.
[57] George J. Pappas, et al. Model-Based Robust Deep Learning, 2020, ArXiv.
[58] Zhao Chen, et al. Gradient Adversarial Training of Neural Networks, 2018, ArXiv.
[59] Xiaohui Kuang, et al. Adaptive iterative attack towards explainable adversarial robustness, 2020, Pattern Recognit.
[60] J. Zico Kolter, et al. Certified Adversarial Robustness via Randomized Smoothing, 2019, ICML.
[61] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[62] Alexei A. Efros, et al. Unbiased look at dataset bias, 2011, CVPR.
[63] John N. Tsitsiklis, et al. Gradient Convergence in Gradient Methods with Errors, 1999, SIAM J. Optim.
[64] Tuo Zhao, et al. Implicit Bias of Gradient Descent based Adversarial Training on Separable Data, 2020, ICLR.
[65] Vladimir N. Vapnik, et al. The Nature of Statistical Learning Theory, 2000, Statistics for Engineering and Information Science.
[66] Frederick R. Forst, et al. On robust estimation of the location parameter, 1980.
[67] Ying Xiong. Nonlinear Optimization, 2014.
[68] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[69] J. Zico Kolter, et al. Fast is better than free: Revisiting adversarial training, 2020, ICLR.
[70] Alexander D'Amour, et al. On Robustness and Transferability of Convolutional Neural Networks, 2021, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[71] Nathan Srebro, et al. VC Classes are Adversarially Robustly Learnable, but Only Improperly, 2019, COLT.
[72] B. Ripley, et al. Pattern Recognition, 1968, Nature.
[73] Sandy H. Huang, et al. Adversarial Attacks on Neural Network Policies, 2017, ICLR.
[74] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[75] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[76] Thomas G. Dietterich, et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, 2018, ICLR.
[77] Pranjal Awasthi, et al. Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks, 2020, ICML.
[78] Po-Ling Loh, et al. Adversarial Risk Bounds via Function Transformation, 2018.
[79] Stephen P. Boyd, et al. Differentiable Convex Optimization Layers, 2019, NeurIPS.
[80] D. Dunson, et al. Discontinuous Hamiltonian Monte Carlo for discrete parameters and discontinuous likelihoods, 2017, arXiv:1705.08510.
[81] Tengyu Ma, et al. Learning One-hidden-layer Neural Networks with Landscape Design, 2017, ICLR.
[82] Yann LeCun, et al. The MNIST database of handwritten digits, 2005.
[83] Edgar Dobriban, et al. Provable tradeoffs in adversarially robust classification, 2020, ArXiv.
[84] Ilias Diakonikolas, et al. Efficiently Learning Adversarially Robust Halfspaces with Noise, 2020, ICML.
[85] Li Yao, et al. A Strong Baseline for Domain Adaptation and Generalization in Medical Imaging, 2019, ArXiv.
[86] Kamyar Azizzadenesheli, et al. Stochastic Activation Pruning for Robust Adversarial Defense, 2018, ICLR.
[87] John Duchi, et al. Understanding and Mitigating the Tradeoff Between Robustness and Accuracy, 2020, ICML.
[88] Ruitong Huang, et al. Max-Margin Adversarial (MMA) Training: Direct Input Space Margin Maximization through Adversarial Training, 2018, ICLR.
[89] Po-Sen Huang, et al. Achieving Robustness in the Wild via Adversarial Mixing With Disentangled Representations, 2020, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[90] Larry S. Davis, et al. Adversarial Training for Free!, 2019, NeurIPS.
[91] Sean A. Munson, et al. Unequal Representation and Gender Stereotypes in Image Search Results for Occupations, 2015, CHI.
[92] Rama Chellappa, et al. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models, 2018, ICLR.
[93] Daniel Cremers, et al. Homogeneous Linear Inequality Constraints for Neural Network Activations, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[94] Matthew D. Zeiler. ADADELTA: An Adaptive Learning Rate Method, 2012, ArXiv.
[95] Panos M. Pardalos, et al. Convex optimization theory, 2010, Optim. Methods Softw.
[96] Radford M. Neal. MCMC Using Hamiltonian Dynamics, 2011, arXiv:1206.1901.
[97] Anuradha M. Annaswamy, et al. Controls for Smart Grids: Architectures and Applications, 2017, Proceedings of the IEEE.
[98] J. Tukey. A survey of sampling from contaminated distributions, 1960.
[99] Henry Leung, et al. A Deep and Scalable Unsupervised Machine Learning System for Cyber-Attack Detection in Large-Scale Smart Grids, 2019, IEEE Access.
[100] Yi Yang, et al. Random Erasing Data Augmentation, 2017, AAAI.
[101] Alejandro Ribeiro, et al. The Empirical Duality Gap of Constrained Statistical Learning, 2020, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[102] Mingyan Liu, et al. Generating Adversarial Examples with Adversarial Networks, 2018, IJCAI.
[103] Moustapha Cissé, et al. Countering Adversarial Images using Input Transformations, 2018, ICLR.
[104] Uri Shaham, et al. Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization, 2015, ArXiv.
[105] Amir Globerson, et al. Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs, 2017, ICML.
[106] Adel Javanmard, et al. Precise Tradeoffs in Adversarial Training for Linear Regression, 2020, COLT.
[107] Aleksander Madry, et al. On Evaluating Adversarial Robustness, 2019, ArXiv.
[108] Nicholas Carlini, et al. Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations, 2020, ICML.
[109] Yang Song, et al. Improving the Robustness of Deep Neural Networks via Stability Training, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[110] Matthias Bethge, et al. A Simple Way to Make Neural Networks Robust Against Diverse Image Corruptions, 2020, ECCV.
[111] Tom Goldstein, et al. Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness, 2020, ICML.
[112] Cyrus Rashtchian, et al. A Closer Look at Accuracy vs. Robustness, 2020, NeurIPS.
[113] Seong Joon Oh, et al. CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features, 2019, IEEE/CVF International Conference on Computer Vision (ICCV).
[114] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[115] Alexandros G. Dimakis, et al. The Robust Manifold Defense: Adversarial Training using Generative Models, 2017, ArXiv.
[116] Yan Feng, et al. Hilbert-Based Generative Defense for Adversarial Examples, 2019, IEEE/CVF International Conference on Computer Vision (ICCV).
[117] Kannan Ramchandran, et al. Rademacher Complexity for Adversarially Robust Generalization, 2018, ICML.
[118] Adel Javanmard, et al. Theoretical Insights Into the Optimization Landscape of Over-Parameterized Shallow Neural Networks, 2017, IEEE Transactions on Information Theory.