Putting Accountability of AI Systems into Practice
To improve and ensure the trustworthiness and ethics of Artificial Intelligence (AI) systems, several initiatives around the globe are producing principles and recommendations, which are proving difficult to translate into technical solutions. A common trait among ethical AI requirements is accountability, which aims to ensure responsibility, auditability, and the reduction of negative impacts of AI systems. To put accountability into practice, this paper presents the Global-view Accountability Framework (GAF), which considers auditability and redress of conflicting information arising in a context where two or more AI systems can produce a negative impact. A technical implementation of the framework for the automotive and motor insurance domain is demonstrated, focusing on preventing and reporting harm caused by autonomous vehicles.
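The abstract describes a global-view check over conflicting information reported by two or more AI systems. As a minimal sketch of that idea (all names, data structures, and the scenario below are illustrative assumptions, not the paper's actual implementation), an auditor can collect per-event reports from each system and flag the fields on which they disagree, so a redress process can be triggered:

```python
from dataclasses import dataclass

@dataclass
class Report:
    """An event report emitted by one AI system (names are illustrative)."""
    system: str
    event_id: str
    claims: dict  # field name -> value reported by this system

def audit(reports):
    """Global-view check: group reports by event and return, per event,
    the fields on which two or more systems report conflicting values."""
    by_event = {}
    for r in reports:
        by_event.setdefault(r.event_id, []).append(r)
    conflicts = {}
    for event_id, rs in by_event.items():
        for field in set().union(*(r.claims.keys() for r in rs)):
            values = {r.system: r.claims.get(field) for r in rs}
            if len(set(values.values())) > 1:
                conflicts.setdefault(event_id, {})[field] = values
    return conflicts

# Hypothetical automotive/insurance scenario: the vehicle's AI and the
# insurer's AI agree on the recorded speed but disagree on fault.
reports = [
    Report("vehicle_ai", "crash-42", {"speed_kmh": 48, "at_fault": False}),
    Report("insurer_ai", "crash-42", {"speed_kmh": 48, "at_fault": True}),
]
print(audit(reports))
```

Only the disputed field is surfaced (here `at_fault`), giving an auditable record of which systems disagreed and on what, which is the kind of conflict the framework's redress mechanism would act on.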