Interpretable Machine Learning from Granular Computing Perspective

Machine Learning (ML) aims to learn from data in order to identify patterns and make predictions. Nowadays, ML models are ubiquitous: they underlie many of the services people use in their daily lives, and consequently these systems affect end users in many ways. Recently, special interest has emerged in the right of the end user to know why a system produces a given output; this field is called Interpretable Machine Learning (IML). The Granular Computing (GrC) paradigm focuses on knowledge modeling inspired by human thinking. In this work we survey the state of the art in the IML and GrC fields in order to lay the groundwork for the contribution each can make to the other, with the aim of building ML models that are both more interpretable and more accurate.
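
To make the notion of interpretability concrete, the following minimal sketch (illustrative only, not a method taken from the surveyed work) trains a shallow decision-tree surrogate on the predictions of a black-box classifier, yielding human-readable if-then rules. It assumes scikit-learn and synthetic data; all names are ours.

    # Global surrogate: approximate a black box with an inspectable tree.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)

    # Opaque "black box" model.
    black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Interpretable surrogate trained to mimic the black box's predictions.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # A shallow tree prints as if-then rules a user can audit.
    print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))

On the GrC side, the basic building block is the information granule; a fuzzy set in Zadeh's sense is one classic example. A minimal sketch (again purely illustrative) of a triangular fuzzy granule:

    def triangular(x, a, b, c):
        """Membership degree of x in the triangular fuzzy granule (a, b, c)."""
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)

    # A "warm temperature" granule peaking at 22 degrees Celsius.
    print(triangular(20.0, 15.0, 22.0, 30.0))  # partial membership, about 0.71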
