Machine Learning – The Results Are Not the Only Thing that Matters! What About Security, Explainability and Fairness?
Marek Pawlicki | Damian Puchalski | Rafał Kozik | Michał Choraś
[1] Kristian Lum,et al. A statistical framework for fair predictive algorithms , 2016, ArXiv.
[2] Jaime S. Cardoso,et al. Towards Complementary Explanations Using Deep Neural Networks , 2018, MLCN/DLF/iMIMIC@MICCAI.
[3] Marko Bohanec,et al. Perturbation-Based Explanations of Prediction Models , 2018, Human and Machine Learning.
[4] Nathan Srebro,et al. Learning Non-Discriminatory Predictors , 2017, COLT.
[5] John Langford,et al. A Reductions Approach to Fair Classification , 2018, ICML.
[6] Milo Honegger,et al. Shedding Light on Black Box Machine Learning Algorithms: Development of an Axiomatic Framework to Assess the Quality of Methods that Explain Individual Predictions , 2018, ArXiv.
[7] Nathan Srebro,et al. Equality of Opportunity in Supervised Learning , 2016, NIPS.
[8] Yiran Chen,et al. Generative Poisoning Attack Method Against Neural Networks , 2017, ArXiv.
[9] Andrew D. Selbst,et al. Big Data's Disparate Impact , 2016 .
[10] Fabio Roli,et al. Pattern Recognition Systems under Attack: Design Issues and Research Challenges , 2014, Int. J. Pattern Recognit. Artif. Intell.
[11] Blaine Nelson,et al. Poisoning Attacks against Support Vector Machines , 2012, ICML.
[12] Dawn Xiaodong Song,et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning , 2017, ArXiv.
[13] Tudor Dumitras,et al. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks , 2018, NeurIPS.
[14] Carola-Bibiane Schönlieb,et al. On the Connection Between Adversarial Robustness and Saliency Map Interpretability , 2019, ICML.
[15] Pratik Gajane,et al. On formalizing fairness in prediction with machine learning , 2017, ArXiv.
[16] Blaine Nelson,et al. Exploiting Machine Learning to Subvert Your Spam Filter , 2008, LEET.
[17] Xiaojin Zhu,et al. Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners , 2015, AAAI.
[18] Debdeep Mukhopadhyay,et al. Adversarial Attacks and Defences: A Survey , 2018, ArXiv.
[19] Michał Choraś,et al. The Feasibility of Deep Learning Use for Adversarial Model Extraction in the Cybersecurity Domain , 2019, IDEAL.
[20] Krishna P. Gummadi,et al. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment , 2016, WWW.
[21] Julia Rubin,et al. Fairness Definitions Explained , 2018, 2018 IEEE/ACM International Workshop on Software Fairness (FairWare).
[22] Eirini Ntoutsi,et al. Dealing with Bias via Data Augmentation in Supervised Learning Scenarios , 2018 .
[23] Toniann Pitassi,et al. Learning Fair Representations , 2013, ICML.
[24] Patrick D. McDaniel,et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples , 2016, ArXiv.
[25] Yunfeng Zhang,et al. Data Augmentation for Discrimination Prevention and Bias Disambiguation , 2020, AIES.
[26] Percy Liang,et al. Understanding Black-box Predictions via Influence Functions , 2017, ICML.
[27] Fabio Roli,et al. Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks , 2018, USENIX Security Symposium.
[28] Krishna P. Gummadi,et al. Fairness Constraints: Mechanisms for Fair Classification , 2015, AISTATS.
[29] Fabio Roli,et al. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization , 2017, AISec@CCS.
[30] Ankur Taly,et al. Explainable machine learning in deployment , 2019, FAT*.
[31] Max Welling,et al. The Variational Fair Autoencoder , 2015, ICLR.
[32] Jaime S. Cardoso,et al. Machine Learning Interpretability: A Survey on Methods and Metrics , 2019, Electronics.
[33] Kush R. Varshney,et al. Optimized Pre-Processing for Discrimination Prevention , 2017, NIPS.
[34] Filip Karlo Dosilovic,et al. Explainable artificial intelligence: A survey , 2018, 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO).
[35] Jean-Michel Loubes,et al. Obtaining Fairness using Optimal Transport Theory , 2018, ICML.
[36] Yongji Wang,et al. Secure Machine Learning, a Brief Overview , 2011, 2011 Fifth International Conference on Secure Software Integration and Reliability Improvement - Companion.
[37] Michael P. Wellman,et al. SoK: Security and Privacy in Machine Learning , 2018, 2018 IEEE European Symposium on Security and Privacy (EuroS&P).
[38] Claudia Eckert,et al. Is Feature Selection Secure against Training Data Poisoning? , 2015, ICML.