Adversarial Risk via Optimal Transport and Optimal Couplings
[1] Xi Chen, et al. Wasserstein Distributional Robustness and Regularization in Statistical Learning, 2017, ArXiv.
[2] Po-Ling Loh, et al. Adversarial Risk Bounds via Function Transformation, 2018.
[3] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[4] Daniel Cullina, et al. Lower Bounds on Adversarial Robustness from Optimal Transport, 2019, NeurIPS.
[5] Yishay Mansour, et al. Improved generalization bounds for robust learning, 2018, ALT.
[6] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[7] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.
[8] A. Kleywegt, et al. Distributionally Robust Stochastic Optimization with Wasserstein Distance, 2016, Math. Oper. Res.
[9] Prateek Mittal, et al. PAC-learning in the presence of adversaries, 2018, NeurIPS.
[10] Jinfeng Yi, et al. Is Robustness the Cost of Accuracy? - A Comprehensive Study on the Robustness of 18 Deep Image Classification Models, 2018, ECCV.
[11] C. Villani. Topics in Optimal Transportation, 2003.
[12] Daniel Kuhn, et al. Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations, 2015, Mathematical Programming.
[13] Xi Chen, et al. Wasserstein Distributionally Robust Optimization and Variation Regularization, 2017, Operations Research.
[14] Moustapha Cissé, et al. Parseval Networks: Improving Robustness to Adversarial Examples, 2017, ICML.
[15] Saeed Mahloujifar, et al. The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure, 2018, AAAI.
[16] Asuka Takatsu. Wasserstein geometry of Gaussian measures, 2011.
[17] Saeed Mahloujifar, et al. Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution, 2018, NeurIPS.
[18] Matthias Hein, et al. Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation, 2017, NIPS.
[19] Petri Koistinen, et al. Using additive noise in back-propagation training, 1992, IEEE Trans. Neural Networks.
[20] Banghua Zhu, et al. Generalized Resilience and Robust Statistics, 2019, The Annals of Statistics.
[21] David Wozabal, et al. Robustifying Convex Risk Measures for Linear Portfolios: A Nonparametric Approach, 2014, Oper. Res.
[22] Martin Wattenberg, et al. Adversarial Spheres, 2018, ICLR.
[23] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[24] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[25] J. Zico Kolter, et al. Certified Adversarial Robustness via Randomized Smoothing, 2019, ICML.
[26] Saeed Mahloujifar, et al. Lower Bounds for Adversarially Robust PAC Learning under Evasion and Hybrid Attacks, 2019, 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA).
[27] Po-Ling Loh, et al. Adversarial Risk Bounds for Binary Classification via Function Transformation, 2018, ArXiv.
[28] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[29] Aleksander Madry, et al. Adversarial Examples Are Not Bugs, They Are Features, 2019, NeurIPS.
[30] Gabriel Peyré, et al. Computational Optimal Transport, 2018, Found. Trends Mach. Learn.
[31] Cho-Jui Hsieh, et al. Efficient Neural Network Robustness Certification with General Activation Functions, 2018, NeurIPS.
[32] François-Xavier Vialard, et al. Scaling algorithms for unbalanced optimal transport problems, 2017, Math. Comput.
[33] Varun Kanade, et al. On the Hardness of Robust Classification, 2019, Electron. Colloquium Comput. Complex.
[34] Tom Goldstein, et al. Are adversarial examples inevitable?, 2018, ICLR.
[35] Aditi Raghunathan, et al. Certified Defenses against Adversarial Examples, 2018, ICLR.
[36] C. Givens, et al. A class of Wasserstein metrics for probability distributions, 1984.
[37] Varun Jog, et al. Generalization error bounds using Wasserstein distances, 2018, 2018 IEEE Information Theory Workshop (ITW).
[38] Aleksander Madry, et al. Robustness May Be at Odds with Accuracy, 2018, ICLR.
[39] Cyrus Rashtchian, et al. Robustness for Non-Parametric Classification: A Generic Attack and Defense, 2020, AISTATS.
[40] Karthyek R. A. Murthy, et al. Robust Wasserstein profile inference and applications to machine learning, 2019, J. Appl. Probab.
[41] Varun Jog, et al. Reverse Lebesgue and Gaussian isoperimetric inequalities for parallel sets with applications, 2020, ArXiv.
[42] Kannan Ramchandran, et al. Rademacher Complexity for Adversarially Robust Generalization, 2018, ICML.
[43] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[44] Lei Yu, et al. Asymptotics for Strassen's Optimal Transport Problem, 2019, ArXiv.
[45] Jinfeng Yi, et al. Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach, 2018, ICLR.
[46] Hamza Fawzi, et al. Adversarial vulnerability for any classifier, 2018, NeurIPS.
[47] Seyed-Mohsen Moosavi-Dezfooli, et al. Robustness via Curvature Regularization, and Vice Versa, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[48] John C. Duchi, et al. Certifying Some Distributional Robustness with Principled Adversarial Training, 2017, ICLR.