Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
[1] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[2] Aapo Hyvärinen, et al. Fast and robust fixed-point algorithms for independent component analysis, 1999, IEEE Trans. Neural Networks.
[3] Simon Haykin, et al. Gradient-Based Learning Applied to Document Recognition, 2001.
[4] Sebastian Mika, et al. Kernel Fisher Discriminants, 2003.
[5] Blaine Nelson, et al. Can machine learning be secure?, 2006, ASIACCS '06.
[6] Ling Huang, et al. ANTIDOTE: understanding and defending against poisoning of anomaly detectors, 2009, IMC '09.
[7] Dieter Fox, et al. Kernel Descriptors for Visual Recognition, 2010, NIPS.
[8] Christopher Potts, et al. Learning Word Vectors for Sentiment Analysis, 2011, ACL.
[9] Blaine Nelson, et al. Support Vector Machines Under Adversarial Label Noise, 2011, ACML.
[10] Claudia Eckert, et al. Adversarial Label Flips Attack on Support Vector Machines, 2012, ECAI.
[11] Sham M. Kakade, et al. Random Design Analysis of Ridge Regression, 2012, COLT.
[12] Nagarajan Natarajan, et al. Learning with Noisy Labels, 2013, NIPS.
[13] Marc'Aurelio Ranzato, et al. Building high-level features using large scale unsupervised learning, 2011, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.
[14] Shie Mannor, et al. Robust Sparse Regression under Adversarial Corruption, 2013, ICML.
[15] Yoshua Bengio, et al. How transferable are features in deep neural networks?, 2014, NIPS.
[16] Fabio Roli, et al. Security Evaluation of Pattern Classifiers under Attack, 2014, IEEE Transactions on Knowledge and Data Engineering.
[17] Trevor Darrell, et al. DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition, 2013, ICML.
[18] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[19] Lee H. Dicker, et al. Variance estimation in high-dimensional linear models, 2014.
[20] Prateek Jain, et al. Robust Regression via Hard Thresholding, 2015, NIPS.
[21] Claudia Eckert, et al. Is Feature Selection Secure against Training Data Poisoning?, 2015, ICML.
[22] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[23] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[24] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Dacheng Tao, et al. Classification with Noisy Labels by Importance Reweighting, 2014, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[26] Michael P. Wellman, et al. Towards the Science of Security and Privacy in Machine Learning, 2016, arXiv.
[27] Fabio Roli, et al. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization, 2017, AISec@CCS.
[28] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[29] Yiran Chen, et al. Generative Poisoning Attack Method Against Neural Networks, 2017, arXiv.
[30] Ryan R. Curtin, et al. Detecting Adversarial Samples from Artifacts, 2017, arXiv.
[31] Jan Hendrik Metzen, et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[32] Chang Liu, et al. Robust Linear Regression Against Training Data Poisoning, 2017, AISec@CCS.
[33] Richard Nock, et al. Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[34] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[35] Dawn Xiaodong Song, et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 2017, arXiv.
[36] Percy Liang, et al. Certified Defenses for Data Poisoning Attacks, 2017, NIPS.
[37] Matthijs Douze, et al. Deep Clustering for Unsupervised Learning of Visual Features, 2018, ECCV.
[38] Michael P. Wellman, et al. SoK: Security and Privacy in Machine Learning, 2018, 2018 IEEE European Symposium on Security and Privacy (EuroS&P).
[39] Ioannis Ch. Paschalidis, et al. A Robust Learning Approach for Regression Models Based on Distributionally Robust Optimization, 2018, J. Mach. Learn. Res.
[40] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[41] Jerry Li, et al. Spectral Signatures in Backdoor Attacks, 2018, NeurIPS.
[42] Pradeep Ravikumar, et al. Connecting Optimization and Regularization Paths, 2018, NeurIPS.
[43] Luis Muñoz-González, et al. Label Sanitization against Label Flipping Poisoning Attacks, 2018, Nemesis/UrbReas/SoGood/IWAISe/GDM@PKDD/ECML.
[44] Pradeep Ravikumar, et al. Representer Point Selection for Explaining Deep Neural Networks, 2018, NeurIPS.
[45] Pravesh Kothari, et al. Efficient Algorithms for Outlier-Robust Regression, 2018, COLT.
[46] Tudor Dumitras, et al. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks, 2018, NeurIPS.
[47] Sivaraman Balakrishnan, et al. Robust estimation via robust gradient estimation, 2018, Journal of the Royal Statistical Society: Series B (Statistical Methodology).
[48] Lawrence Carin, et al. Certified Adversarial Robustness with Additive Gaussian Noise, 2018, NeurIPS.
[49] Suman Jana, et al. Certified Robustness to Adversarial Examples with Differential Privacy, 2018, 2019 IEEE Symposium on Security and Privacy (SP).
[50] Tommi S. Jaakkola, et al. Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers, 2019, NeurIPS.
[51] Greg Yang, et al. Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers, 2019, NeurIPS.
[52] Yanyao Shen, et al. Learning with Bad Training Data via Iterative Trimmed Loss Minimization, 2018, ICML.
[53] Ilias Diakonikolas, et al. Efficient Algorithms and Lower Bounds for Robust Linear Regression, 2018, SODA.
[54] Tom Goldstein, et al. Transferable Clean-Label Poisoning Attacks on Deep Neural Nets, 2019, ICML.
[55] Eric Price, et al. Compressed Sensing with Adversarial Sparse Noise via L1 Regression, 2018, SOSA.
[56] J. Zico Kolter, et al. Certified Adversarial Robustness via Randomized Smoothing, 2019, ICML.
[57] Geoffrey E. Hinton, et al. A Simple Framework for Contrastive Learning of Visual Representations, 2020, ICML.
[58] Arthur Jacot, et al. Neural Tangent Kernel: Convergence and Generalization in Neural Networks, 2018, NeurIPS.
[59] Pushmeet Kohli, et al. A Framework for Robustness Certification of Smoothed Classifiers Using F-Divergences, 2020, ICLR.
[60] Mauro Conti, et al. On defending against label flipping attacks on malware detection systems, 2019, Neural Computing and Applications.
[61] Percy Liang, et al. Stronger data poisoning attacks break data sanitization defenses, 2018, Machine Learning.