Seeing the Forest Through the Trees: Learning a Comprehensible Model from an Ensemble
[1] Yoav Freund et al. Experiments with a New Boosting Algorithm, 1996, ICML.
[2] J. Ross Quinlan et al. Boosting First-Order Learning, 1996, ALT.
[3] Mark Craven et al. Extracting comprehensible models from trained neural networks, 1996.
[4] Thomas Richardson et al. Interpretable Boosted Naïve Bayes Classification, 1998, KDD.
[5] Pedro M. Domingos. Knowledge Discovery Via Multiple Models, 1998, Intell. Data Anal.
[6] Catherine Blake et al. UCI Repository of machine learning databases, 1998.
[7] Thomas G. Dietterich. Ensemble Methods in Machine Learning, 2000, Multiple Classifier Systems.
[8] Ian Witten et al. Data Mining, 2000.
[9] Susanne Hoche et al. Relational Learning Using Constrained Confidence-Rated Boosting, 2001, ILP.
[10] José Hernández-Orallo et al. From Ensemble Methods to Comprehensible Models, 2002, Discovery Science.
[11] David Page et al. An Empirical Evaluation of Bagging in Inductive Logic Programming, 2002, ILP.
[12] Zhi-Hua Zhou et al. Extracting symbolic rules from trained neural network ensembles, 2003, AI Commun.
[13] Leo Breiman et al. Random Forests, 2001, Machine Learning.
[14] Peter D. Turney. Technical note: Bias and the quantification of stability, 1995, Machine Learning.
[15] J. Ross Quinlan et al. Induction of Decision Trees, 1986, Machine Learning.
[16] Leo Breiman et al. Bagging Predictors, 1996, Machine Learning.
[17] Eric Bauer et al. An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants, 1999, Machine Learning.
[18] Saso Dzeroski et al. First order random forests: Learning relational classifiers with complex aggregates, 2006, Machine Learning.
[19] Hendrik Blockeel et al. Seeing the Forest Through the Trees, 2007, ILP.