Falling Rule Lists

Falling rule lists are classification models consisting of an ordered list of if-then rules, in which (i) the order of the rules determines which rule classifies each example, and (ii) the estimated probability of success decreases monotonically down the list. These rule lists are inspired by healthcare applications in which patients are stratified into risk sets and the highest-risk patients are considered first. We provide a Bayesian framework for learning falling rule lists that does not rely on traditional greedy decision tree learning methods.
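
To make the structure concrete, the following Python sketch shows a hypothetical falling rule list at prediction time: an example is classified by the first rule whose condition it satisfies, and a helper checks the monotonicity constraint. The features, conditions, and probabilities are invented for illustration only; this is the data structure, not the paper's Bayesian learning procedure.

    # Minimal illustrative sketch of a falling rule list at prediction time.
    # Conditions, feature names, and probabilities are hypothetical examples.
    from typing import Callable, List, Tuple

    Rule = Tuple[Callable[[dict], bool], float]  # (if-condition, estimated probability)

    # Rules are ordered so estimated probabilities never increase down the list;
    # the final rule is a catch-all default.
    falling_rule_list: List[Rule] = [
        (lambda x: x["systolic_bp"] < 90,           0.85),
        (lambda x: x["age"] > 75 and x["diabetic"], 0.60),
        (lambda x: x["age"] > 75,                   0.40),
        (lambda x: True,                            0.10),  # default rule
    ]

    def predict(x: dict) -> float:
        """Return the estimate of the first rule whose condition matches x."""
        for condition, prob in falling_rule_list:
            if condition(x):
                return prob
        raise ValueError("no rule matched; the list should end with a default rule")

    def is_falling(rules: List[Rule]) -> bool:
        """Check the monotonicity constraint: probabilities decrease down the list."""
        probs = [p for _, p in rules]
        return all(a >= b for a, b in zip(probs, probs[1:]))

    patient = {"systolic_bp": 118, "age": 80, "diabetic": True}
    print(predict(patient))               # 0.60 -- the second rule is the first match
    print(is_falling(falling_rule_list))  # True

Because the highest-probability rules sit at the top, a decision-maker can read down the list and stop as soon as a rule fires, which is what makes the ordering itself part of the model's interpretability.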
