Toward Design and Evaluation Framework for Interpretable Machine Learning Systems
The need for interpretable and accountable intelligent systems grows as artificial intelligence plays a greater role in human life. Explainable artificial intelligence systems can address this need by explaining the reasoning behind an intelligent system's decisions and predictions. My research supports the design and evaluation of interpretable machine learning systems, drawing on knowledge and experience from machine learning, human-computer interaction, and data visualization. My research objectives are to present a design and evaluation framework for explainable artificial intelligence systems, to propose new methods and metrics that better evaluate the benefits of transparent machine learning systems, and to apply interpretability methods to verify model reliability.