A Comparison of Methods for Learning and Combining Evidence From Multiple Models

Most previous work on learning multiple models has been evaluated on only a few domains. We present a comparison of three methods for learning multiple models on 29 data sets from the UCI repository. The methods are bagging, k-fold partition learning, and stochastic search. By using 29 data sets of various kinds (artificial data sets, artificial data sets with noise, molecular-biology data sets, and real-world noisy data sets), we are able to draw robust experimental conclusions about the kinds of data sets for which each learning method works best. We also compare four evidence combination methods (Uniform Voting, Bayesian Combination, Distribution Summation, and Likelihood Combination) and characterize the kinds of data sets for which each combination method works best.