An Ensemble Method Based on AdaBoost and Meta-Learning
Xuan Liu | Xiaoguang Wang | Nathalie Japkowicz | Stan Matwin
[1] J. Ross Quinlan, et al. C4.5: Programs for Machine Learning, 1992.
[2] Ron Kohavi, et al. Bias Plus Variance Decomposition for Zero-One Loss Functions, 1996, ICML.
[3] David H. Wolpert, et al. No Free Lunch Theorems for Optimization, 1997, IEEE Trans. Evol. Comput.
[4] David W. Aha, et al. Lazy Learning, 1997, Springer Netherlands.
[5] João Gama, et al. Combining Classification Algorithms, 2000.
[6] Pedro M. Domingos, et al. On the Optimality of the Simple Bayesian Classifier under Zero-One Loss, 1997, Machine Learning.
[7] Robert C. Holte, et al. Very Simple Classification Rules Perform Well on Most Commonly Used Datasets, 1993, Machine Learning.
[8] Geoffrey I. Webb, et al. Multistrategy Ensemble Learning: Reducing Error by Combining Ensemble Learning Techniques, 2004, IEEE Transactions on Knowledge and Data Engineering.
[9] Eric Bauer, et al. An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants, 1999, Machine Learning.
[10] R. Schapire. The Strength of Weak Learnability, 1990, Machine Learning.
[11] Sotiris B. Kotsiantis, et al. Machine Learning: A Review of Classification and Combining Techniques, 2006, Artificial Intelligence Review.
[12] Mohak Shah, et al. Evaluating Learning Algorithms: A Classification Perspective, 2011.
[13] Stan Matwin, et al. Extending AdaBoost to Iteratively Vary Its Base Classifiers, 2011, Canadian Conference on AI.