An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization
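Only the title and reference list of Dietterich's paper survive on this page, but the three ensemble constructions named in the title can be illustrated with a minimal sketch. The sketch below is not from the paper: it uses scikit-learn in place of C4.5, with `ExtraTreesClassifier` standing in as an approximation of Dietterich's randomized split selection (the paper picks uniformly among the top 20 candidate splits at each node); the dataset, estimator counts, and hyperparameters are illustrative assumptions.

```python
# Hedged sketch (not the paper's implementation) of the three ensemble
# constructions compared by Dietterich: bagging, boosting, and randomization.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              ExtraTreesClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Illustrative dataset choice; the paper used 33 UCI datasets.
X, y = load_breast_cancer(return_X_y=True)

ensembles = {
    # Bagging: each tree is grown on a bootstrap resample of the training set.
    "bagging": BaggingClassifier(DecisionTreeClassifier(),
                                 n_estimators=50, random_state=0),
    # Boosting (AdaBoost): trees are grown sequentially on reweighted data.
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
    # Randomization: splits are chosen at random at each node; this only
    # approximates the paper's randomized C4.5, which samples uniformly
    # from the 20 best candidate splits.
    "randomization": ExtraTreesClassifier(n_estimators=50, random_state=0),
}

for name, clf in ensembles.items():
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name:>13}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

On clean data Dietterich found boosting generally strongest, with randomization competitive with bagging; under added classification noise, bagging and randomization were more robust than boosting.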
[1] Kamal A. Ali. A Comparison of Methods for Learning and Combining Evidence From Multiple Models, 1995.
[2] Ron Kohavi, et al. Data Mining Using MLC++, 1996.
[3] Yoav Freund, et al. Experiments with a New Boosting Algorithm, 1996, ICML.
[4] J. Ross Quinlan. Bagging, Boosting, and C4.5, 1996, AAAI/IAAI, Vol. 1.
[5] Leo Breiman. Bias, Variance, and Arcing Classifiers, 1996.
[6] Leo Breiman. Heuristics of Instability and Stabilization in Model Selection, 1996.
[7] David W. Opitz, et al. An Empirical Evaluation of Bagging and Boosting, 1997, AAAI/IAAI.
[8] Ron Kohavi, et al. Option Decision Trees with Majority Votes, 1997, ICML.
[9] Ron Kohavi, et al. Data Mining Using MLC++, a Machine Learning Library in C++, 1996, Int. J. Artif. Intell. Tools.
[10] Dharmendra S. Margineantu, et al. Pruning Adaptive Boosting, 1997, ICML.
[11] Thomas G. Dietterich. Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms, 1998, Neural Computation.
[12] Catherine Blake, et al. UCI Repository of Machine Learning Databases, 1998.
[13] Kamal M. Ali, et al. Error Reduction through Learning Multiple Descriptions, 1996, Machine Learning.
[14] Leo Breiman. Bagging Predictors, 1996, Machine Learning.
[15] Eric Bauer, et al. An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants, 1999, Machine Learning.