Scenario-Based Requirements Elicitation for User-Centric Explainable AI - A Case in Fraud Detection

Explainable Artificial Intelligence (XAI) develops technical explanation methods that make the predictions of Artificial Intelligence (AI) and machine learning (ML) models interpretable for human stakeholders. However, stakeholders' trust in AI models and their explanations remains an issue, especially for domain experts who are knowledgeable about their domain but not about the inner workings of AI. Social and user-centric XAI research argues that it is essential to understand stakeholders' requirements in order to provide explanations tailored to their needs and to enhance their trust in working with AI models. Scenario-based design and requirements elicitation can help bridge the gap between the social and operational aspects of stakeholders early, before an information system is adopted, by identifying their real problems and practices and generating user requirements. Nevertheless, the adoption of scenarios in XAI is still rarely explored, especially in fraud detection, to support experts who are about to work with AI models. We demonstrate the use of scenario-based requirements elicitation for XAI in a fraud detection context and develop scenarios derived with experts in banking fraud. We discuss how these scenarios can be used to identify the requirements of users or experts for appropriate explanations in their daily operations and in making decisions when reviewing potentially fraudulent cases in banking. The generalizability of the scenarios for further adoption is validated through a systematic literature review in the domains of XAI and visual analytics for fraud detection.
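To make the kind of explanation a fraud-review expert might be shown more concrete, the following is a minimal, illustrative sketch, not a method prescribed by this work: a classifier trained on synthetic transactions together with a crude per-case feature attribution answering "why was this transaction flagged?". The feature names and the perturbation-based attribution are assumptions introduced here purely for illustration.

```python
# Illustrative sketch (not from the paper): a fraud classifier on synthetic
# transactions plus a crude per-case attribution of the kind a fraud analyst
# might review. Feature names and the perturbation method are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
features = ["amount", "hour_of_day", "foreign_merchant", "txn_count_24h"]

# Synthetic transactions: fraud is more likely for large, nocturnal,
# foreign, high-frequency activity (purely illustrative data).
X = np.column_stack([
    rng.exponential(100, n),   # amount
    rng.integers(0, 24, n),    # hour_of_day
    rng.integers(0, 2, n),     # foreign_merchant (0/1)
    rng.poisson(3, n),         # txn_count_24h
])
risk = 0.004 * X[:, 0] + 0.1 * (X[:, 1] < 6) + 0.8 * X[:, 2] + 0.15 * X[:, 3]
y = (risk + rng.normal(0, 0.3, n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def local_attribution(model, x, baseline):
    """Crude local explanation: drop in fraud probability when each feature
    is replaced by a 'typical' (median) value. A larger drop means the
    feature pushed this case towards the fraud label."""
    p_full = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = {}
    for j, name in enumerate(features):
        x_pert = x.copy()
        x_pert[j] = baseline[j]
        scores[name] = p_full - model.predict_proba(x_pert.reshape(1, -1))[0, 1]
    return p_full, scores

baseline = np.median(X_train, axis=0)
flagged = X_test[model.predict(X_test) == 1]
if len(flagged):
    p, attribution = local_attribution(model, flagged[0], baseline)
    print(f"fraud probability: {p:.2f}")
    for name, contrib in sorted(attribution.items(), key=lambda kv: -kv[1]):
        print(f"  {name:>16}: {contrib:+.2f}")
```

In practice the scenarios discussed in the paper would inform which explanation form (feature attributions, counterfactuals, visual analytics views, etc.) actually matches the expert's review workflow; the perturbation scheme above is only a stand-in for such methods.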
