Securing the Deep Fraud Detector in Large-Scale E-Commerce Platform via Adversarial Machine Learning Approach
Bo An | Long Zhang | Mengchen Zhao | Zhao Li | Qingyu Guo | Jiaming Huang | Pengrui Hui