Adversarial Robustness for Tabular Data through Cost and Utility Awareness
[1] Nicolas Flammarion, et al. On the effectiveness of adversarial training against common corruptions, 2021, UAI.
[2] Francesco Cartella, et al. Adversarial Attacks for Tabular Data: Application to Fraud Detection and Imbalanced Data, 2021, SafeAI@AAAI.
[3] Sicco Verwer, et al. Efficient Training of Robust Decision Trees Against Adversarial Examples, 2020, ICML.
[4] Haifeng Xu, et al. PAC-Learning for Strategic Classification, 2020, ICML.
[5] Yves Le Traon, et al. Search-based adversarial testing and improvement of constrained credit scoring systems, 2020, ESEC/FSE.
[6] A. Shabtai, et al. Not All Datasets Are Born Equal: On Heterogeneous Data and Adversarial Examples, 2020, ArXiv.
[7] Michel Barlaud, et al. Efficient Projection Algorithms onto the Weighted ℓ1 Ball, 2020, ArXiv.
[8] Fenglong Ma, et al. Attackability Characterization of Adversarial Evasion Attack on Discrete Data, 2020, KDD.
[9] Sahil Singla, et al. Perceptual Adversarial Robustness: Defense Against Unseen Threat Models, 2020, ICLR.
[10] Kun Zhang, et al. A Causal View on Robustness of Neural Networks, 2020, NeurIPS.
[11] Quan Z. Sheng, et al. Adversarial Attacks on Deep-learning Models in Natural Language Processing, 2020, ACM Trans. Intell. Syst. Technol.
[12] B. Schneier, et al. Politics of Adversarial Machine Learning, 2020, SSRN Electronic Journal.
[13] J. Zico Kolter, et al. Fast is better than free: Revisiting adversarial training, 2020, ICLR.
[14] Yizheng Chen, et al. Cost-Aware Robust Tree Ensembles for Security Applications, 2019, USENIX Security Symposium.
[15] P. Frossard, et al. Imperceptible Adversarial Attacks on Tabular Data, 2019, ArXiv.
[16] Feargus Pendlebury, et al. Intriguing Properties of Adversarial ML Attacks in the Problem Space, 2020 IEEE Symposium on Security and Privacy (SP).
[17] Moritz Hardt, et al. Strategic Classification is Causal Modeling in Disguise, 2019, ICML.
[18] Boxin Wang, et al. AdvCodec: Towards A Unified Framework for Adversarial Text Generation, 2019, ArXiv.
[19] J. Zico Kolter, et al. Adversarial Robustness Against the Union of Multiple Perturbation Models, 2019, ICML.
[20] Sercan O. Arik, et al. TabNet: Attentive Interpretable Tabular Learning, 2019, AAAI.
[21] Claudio Lucchese, et al. Treant: training evasion-aware decision trees, 2019, Data Mining and Knowledge Discovery.
[22] Matthias Hein, et al. Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks, 2019, NeurIPS.
[23] Larry S. Davis, et al. Adversarial Training for Free!, 2019, NeurIPS.
[24] Cho-Jui Hsieh, et al. Robust Decision Trees Against Adversarial Examples, 2019, ICML.
[25] J. Zico Kolter, et al. Wasserstein Adversarial Examples via Projected Sinkhorn Iterations, 2019, ICML.
[26] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy, 2019, ICML.
[27] Alexandros G. Dimakis, et al. Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification, 2018, MLSys.
[28] Carmela Troncoso, et al. Evading classifiers in discrete domains with provable optimality guarantees, 2018, ArXiv.
[29] Xiao Zhang, et al. Cost-Sensitive Robustness against Adversarial Examples, 2018, ICLR.
[30] Anca D. Dragan, et al. The Social Cost of Strategic Classification, 2018, FAT*.
[31] Carmela Troncoso, et al. POTs: protective optimization technologies, 2018, FAT*.
[32] Michael I. Jordan, et al. Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data, 2018, J. Mach. Learn. Res.
[33] J. Zico Kolter, et al. Scaling provable adversarial defenses, 2018, NeurIPS.
[34] Jinyuan Jia, et al. AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning, 2018, USENIX Security Symposium.
[35] Claudia Eckert, et al. Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables, 2018 26th European Signal Processing Conference (EUSIPCO).
[36] Lujo Bauer, et al. On the Suitability of Lp-Norms for Creating and Preventing Adversarial Examples, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[37] Prateek Mittal, et al. DARTS: Deceiving Autonomous Cars with Toxic Signs, 2018, ArXiv.
[38] Dejing Dou, et al. HotFlip: White-Box Adversarial Examples for Text Classification, 2017, ACL.
[39] Fabio Roli, et al. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning, 2017, Pattern Recognit.
[40] Aaron Roth, et al. Strategic Classification from Revealed Preferences, 2017, EC.
[41] Jon Crowcroft, et al. Classification of Twitter Accounts into Automated Agents and Human Users, 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM).
[42] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[43] Fabio Roli, et al. Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection, 2017, IEEE Transactions on Dependable and Secure Computing.
[44] Xirong Li, et al. Deep Text Classification Can be Fooled, 2017, IJCAI.
[45] Patrick D. McDaniel, et al. On the (Statistical) Detection of Adversarial Examples, 2017, ArXiv.
[46] Michael P. Wellman, et al. Towards the Science of Security and Privacy in Machine Learning, 2016, ArXiv.
[47] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017 IEEE Symposium on Security and Privacy (SP).
[48] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[49] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[50] J. Doug Tygar, et al. Evasion and Hardening of Tree Ensemble Classifiers, 2015, ICML.
[51] Christos H. Papadimitriou, et al. Strategic Classification, 2015, ITCS.
[52] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[53] Rami Puzis, et al. Potential-based bounded-cost search and Anytime Non-Parametric A*, 2014, Artif. Intell.
[54] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[55] Rami Puzis, et al. Potential Search: A Bounded-Cost Search Algorithm, 2011, ICAPS.
[56] Sergios Theodoridis, et al. Adaptive algorithm for sparse system identification using projections onto weighted ℓ1 balls, 2010 IEEE International Conference on Acoustics, Speech and Signal Processing.
[57] Blaine Nelson, et al. Can machine learning be secure?, 2006, ASIACCS '06.
[58] Christopher Meek, et al. Adversarial learning, 2005, KDD '05.
[59] Samir Khuller, et al. The Budgeted Maximum Coverage Problem, 1999, Inf. Process. Lett.
[60] Richard E. Korf, et al. Iterative-Deepening-A*: An Optimal Admissible Tree Search, 1985, IJCAI.
[61] Rina Dechter, et al. Generalized best-first search strategies and the optimality of A*, 1985, JACM.
[62] L. Wolsey. Maximising Real-Valued Submodular Functions: Primal and Dual Heuristics for Location Problems, 1982, Math. Oper. Res.
[63] J. Gower. A General Coefficient of Similarity and Some of Its Properties, 1971.
[64] Nils J. Nilsson, et al. A Formal Basis for the Heuristic Determination of Minimum Cost Paths, 1968, IEEE Trans. Syst. Sci. Cybern.
[65] Bernhard Schölkopf, et al. Adversarial Robustness through the Lens of Causality, 2022, ICLR.
[66] Mario Polino, et al. Evasion Attacks against Banking Fraud Detection Systems, 2020, RAID.
[67] Ira Pohl. Heuristic Search Viewed as Path Finding in a Graph, 1970, Artif. Intell.