Bridging the Transparency Gap: What Can Explainable AI Learn from the AI Act?
[1] Agathe Balayn,et al. Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK , 2023, FAccT.
[2] J. Ser,et al. Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence , 2023, Inf. Fusion.
[3] Tim Miller. Explainable AI is Dead, Long Live Explainable AI!: Hypothesis-driven Decision Support using Evaluative AI , 2023, FAccT.
[4] Shay B. Cohen,et al. Causal Explanations for Sequential Decision-Making in Multi-Agent Systems , 2023, AAMAS.
[5] Sandra Wachter,et al. Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk , 2023, Regulation & Governance.
[6] Esther Keymolen,et al. Explanation and Agency: exploring the normative-epistemic landscape of the “Right to Explanation” , 2022, Ethics and Information Technology.
[7] T. Seidel,et al. Towards Human-centered Explainable AI: User Studies for Model Explanations , 2022, ArXiv.
[8] Finale Doshi-Velez,et al. Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI , 2022, HCOMP.
[9] S. Saralajew,et al. A Human-Centric Assessment Framework for AI , 2022, ArXiv.
[10] Jakob Schoeffer,et al. “There Is Not Enough Information”: On the Effects of Explanations on Perceptions of Informational Fairness and Trustworthiness in Automated Decision-Making , 2022, FAccT.
[11] M. Palmirani,et al. Metrics, Explainability and the European AI Act Proposal , 2022, J.
[12] Kaley J. Rittichier,et al. Trustworthy Artificial Intelligence: A Review , 2022, ACM Comput. Surv..
[13] L. Chen,et al. CPKD: Concepts-Prober-Guided Knowledge Distillation for Fine-Grained CNN Explanation , 2021, 2021 2nd International Conference on Electronics, Communications and Information Technology (CECIT).
[14] M. Cannarsa. Ethics Guidelines for Trustworthy AI , 2021, The Cambridge Handbook of Lawyering in the Digital Age.
[15] N. Jennings,et al. Trustworthy human-AI partnerships , 2021, iScience.
[16] Plamen P. Angelov,et al. Explainable artificial intelligence: an analytical review , 2021, WIREs Data Mining Knowl. Discov..
[17] Michael Veale,et al. Demystifying the Draft EU Artificial Intelligence Act , 2021, ArXiv.
[18] Bodhisattwa Prasad Majumder,et al. Knowledge-Grounded Self-Rationalization via Extractive and Natural Language Explanations , 2021, ICML.
[19] Gesina Schwalbe,et al. A comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts , 2021, Data Mining and Knowledge Discovery.
[20] Marcin Detyniecki,et al. Understanding Prediction Discrepancies in Machine Learning Classifiers , 2021, ArXiv.
[21] C. Rudin,et al. Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges , 2021, Statistics Surveys.
[22] Emre Bayamlıoğlu. The right to contest automated decisions under the General Data Protection Regulation : Beyond the so‐called “right to explanation” , 2021 .
[23] Michael Winikoff,et al. Artificial Intelligence and the Right to Explanation as a Human Right , 2021, IEEE Internet Computing.
[24] Holger Hermanns,et al. What Do We Want From Explainable Artificial Intelligence (XAI)? - A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research , 2021, Artif. Intell..
[25] Marco F. Huber,et al. A Survey on the Explainability of Supervised Machine Learning , 2020, J. Artif. Intell. Res..
[26] Mireille Hildebrandt,et al. Law for Computer Scientists and Other Folk , 2020 .
[27] Vlad I. Morariu,et al. Black-box Explanation of Object Detectors via Saliency Maps , 2020, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[28] Aaron Sedley,et al. Exciting, Useful, Worrying, Futuristic: Public Perception of Artificial Intelligence in 8 Countries , 2019, AIES.
[29] Peter A. Flach,et al. Explainability fact sheets: a framework for systematic assessment of explainable approaches , 2019, FAT*.
[30] Brandon M. Greenwell,et al. Interpretable Machine Learning , 2019, Hands-On Machine Learning with R.
[31] Jaime S. Cardoso,et al. Machine Learning Interpretability: A Survey on Methods and Metrics , 2019, Electronics.
[32] Gary Klein,et al. Metrics for Explainable AI: Challenges and Prospects , 2018, ArXiv.
[33] Thomas Lukasiewicz,et al. e-SNLI: Natural Language Inference with Natural Language Explanations , 2018, NeurIPS.
[34] Eric D. Ragan,et al. A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems , 2018, ACM Trans. Interact. Intell. Syst..
[35] Chris Russell,et al. Explaining Explanations in AI , 2018, FAT.
[36] M. Kaminski. The right to explanation, explained , 2018, Research Handbook on Information Law and Governance.
[37] Timnit Gebru,et al. Datasheets for datasets , 2018, Commun. ACM.
[38] Michael Veale,et al. Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”? , 2018, IEEE Security & Privacy.
[39] Roland Vogl,et al. Rethinking Explainable Machines: The GDPR's 'Right to Explanation' Debate and the Rise of Algorithmic Audits in Enterprise , 2018 .
[40] Franco Turini,et al. A Survey of Methods for Explaining Black Box Models , 2018, ACM Comput. Surv..
[41] Deborah G. Johnson,et al. Reframing AI Discourse , 2017, Minds and Machines.
[42] Giovanni Comandé,et al. Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation , 2017 .
[43] Tim Miller,et al. Explanation in Artificial Intelligence: Insights from the Social Sciences , 2017, Artif. Intell..
[44] Luciano Floridi,et al. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation , 2017 .
[45] Tiffany Curtiss. Computer Fraud and Abuse Act Enforcement: Cruel, Unusual, and Due for Reform , 2016, Washington Law Review.
[46] Seth Flaxman,et al. European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation" , 2016, AI Mag..
[47] Marco Tulio Ribeiro,et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier , 2016, HLT-NAACL Demos.
[48] Radford M. Neal. Pattern Recognition and Machine Learning , 2007, Technometrics.
[49] B. Malle,et al. How People Explain Behavior: A New Theoretical Framework , 1999, Personality and social psychology review : an official journal of the Society for Personality and Social Psychology, Inc.
[50] R. Schifter. White House , 1996 .
[51] Andrew Sheppard,et al. Parliament , 1982, The Lancet.
[52] L. Agustín,et al. European Parliament , 1979, International and Comparative Law Quarterly.
[53] G. Williams. Causation in the Law , 1961, The Cambridge Law Journal.
[55] Open Rights Group response to the DCMS policy paper “Establishing a pro-innovation approach to regulating AI” , 2022 .
[56] Ilia Stepin,et al. A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence , 2021, IEEE Access.
[57] P. Hacker,et al. Varieties of AI Explanations Under the Law. From the GDPR to the AIA, and Beyond , 2020, xxAI@ICML.
[58] Maël Pégny,et al. The Right to an Explanation , 2019, Delphi - Interdisciplinary Review of Emerging Technologies.
[60] Article 29 Data Protection Working Party , 2013.
[61] Mireille Hildebrandt,et al. The Dawn of a Critical Transparency Right for the Profiling Era , 2012 .
[62] Gunther Teubner. Breaking Frames: The Global Interplay of Legal and Social Systems , 1997 .
[63] John Mingers,et al. Law as an Autopoietic System , 1995 .
[64] Hengshuai Yao,et al. Explainable Artificial Intelligence for Autonomous Driving: A Comprehensive Overview and Field Guide for Future Research Directions , 2021, ArXiv.