An empirical investigation into the interpretability of data mining models based on decision trees, tables and rules

While data mining research has largely focused on developing ever more accurate predictive models, a much smaller body of research has investigated to what extent these models are actually interpretable by decision makers. Given the importance of interpretability for a model's validation, acceptance and successful application, we present an experimental study in which we empirically compare the interpretability of various representation forms, viz. decision tables, decision trees, propositional rules and oblique rules, and explore the effect of model size and complexity on their usefulness.