An Empirical Comparison of Supervised Ensemble Learning Approaches
[1] Yoav Freund, et al. A decision-theoretic generalization of on-line learning and an application to boosting, 1995, EuroCOLT.
[2] Leo Breiman, et al. Random Forests, 2001, Machine Learning.
[3] Friedhelm Schwenker, et al. Ensemble Methods: Foundations and Algorithms [Book Review], 2013, IEEE Computational Intelligence Magazine.
[4] Thomas G. Dietterich, et al. Pruning Adaptive Boosting, 1997, ICML.
[5] Pierre Geurts, et al. Extremely randomized trees, 2006, Machine Learning.
[6] Leo Breiman, et al. Randomizing Outputs to Increase Prediction Accuracy, 2000, Machine Learning.
[7] Daniel Hernández-Lobato, et al. How large should ensembles of classifiers be?, 2013, Pattern Recognit.
[8] Roger E. Bumgarner, et al. Comparative hybridization of an array of 21,500 ovarian cDNAs for the discovery of genes overexpressed in ovarian carcinomas, 1999, Gene.
[9] Chun-Xia Zhang, et al. RotBoost: A technique for combining Rotation Forest and AdaBoost, 2008, Pattern Recognit. Lett.
[10] Leo Breiman, et al. Bias, Variance, and Arcing Classifiers, 1996.
[11] Rich Caruana, et al. Data mining in metric space: an empirical analysis of supervised learning performance criteria, 2004, ROCAI.
[12] Janez Demsar, et al. Statistical Comparisons of Classifiers over Multiple Data Sets, 2006, J. Mach. Learn. Res.
[13] Catherine Blake, et al. UCI Repository of machine learning databases, 1998.
[14] Bianca Zadrozny, et al. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers, 2001, ICML.
[15] Jill P. Mesirov, et al. Class prediction and discovery using gene expression data, 2000, RECOMB '00.
[16] Rich Caruana, et al. An empirical comparison of supervised learning algorithms, 2006, ICML.
[17] Juan José Rodríguez Diez, et al. Rotation Forest: A New Classifier Ensemble Method, 2006, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[18] Tin Kam Ho, et al. The Random Subspace Method for Constructing Decision Forests, 1998, IEEE Trans. Pattern Anal. Mach. Intell.
[19] Gonzalo Martínez-Muñoz, et al. Switching Class Labels to Generate Classification Ensembles, 2005.
[20] J. Mesirov, et al. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring, 1999, Science.
[21] Gonzalo Martínez-Muñoz, et al. Switching class labels to generate classification ensembles, 2005, Pattern Recognit.
[22] Tony Jebara, et al. Variance Penalizing AdaBoost, 2011, NIPS.
[23] Nir Friedman, et al. Tissue classification with gene expression profiles, 2000.
[24] Y. Freund, et al. Discussion of the paper "Additive Logistic Regression: A Statistical View of Boosting", 2000.
[25] Rich Caruana, et al. Predicting good probabilities with supervised learning, 2005, ICML.
[26] Gilles Louppe, et al. Ensembles on Random Patches, 2012, ECML/PKDD.
[27] Eric Bauer, et al. An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants, 1999, Machine Learning.
[28] Leo Breiman, et al. Bagging Predictors, 1996, Machine Learning.
[29] Thomas G. Dietterich, et al. Error-Correcting Output Coding Corrects Bias and Variance, 1995, ICML.
[30] De-Shuang Huang, et al. Cancer classification using Rotation Forest, 2008, Comput. Biol. Medicine.