Unsupervised Adversarial Anomaly Detection using One-Class Support Vector Machines
Christopher Leckie | Tansu Alpcan | Sarah M. Erfani | Margreta Kuijper | Prameesha Weerasinghe