Improvement over Bayes prediction in small samples in the presence of model uncertainty

In an online prediction context, the authors introduce a new class of mongrel criteria that weight candidate models and combine their predictions using both model-based and empirical measures of performance. They present simulation results showing that model averaging with the mongrel-derived weights yields, in small samples, predictions more accurate than those obtained by Bayesian weight updating, provided that no candidate model is too distant from the data generator.
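The general idea can be illustrated with a minimal sketch. The paper's actual mongrel criteria are not reproduced here; instead the snippet blends Bayesian posterior-style weights (from cumulative log-likelihood) with purely empirical weights (from cumulative squared-error loss) via a hypothetical mixing parameter `alpha`, then forms the averaged online prediction. The candidate models, data generator, and blending rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data generator: y_t = 1.5 * x_t + Gaussian noise
T = 50
x = rng.normal(size=T)
y = 1.5 * x + rng.normal(scale=0.5, size=T)

# Two candidate models, each a fixed-slope predictor y_hat = b * x.
# Neither slope equals the true 1.5, so both are misspecified but
# not "too distant" from the data generator.
slopes = np.array([1.0, 2.0])
sigma = 0.5  # assumed noise scale for the Gaussian likelihood

log_bayes = np.zeros(2)  # cumulative log-likelihood per model (model-based measure)
emp_loss = np.zeros(2)   # cumulative squared-error loss per model (empirical measure)
alpha = 0.5              # hypothetical blend between the two measures

preds = []
for t in range(T):
    yhat = slopes * x[t]  # each candidate model's one-step-ahead prediction

    # Bayesian-style weights from accumulated log-likelihood
    w_bayes = np.exp(log_bayes - log_bayes.max())
    w_bayes /= w_bayes.sum()

    # Empirical weights: smaller average loss -> larger weight
    w_emp = np.exp(-emp_loss / (t + 1))
    w_emp /= w_emp.sum()

    # Blended weights (illustrative stand-in for a mongrel criterion)
    w = alpha * w_bayes + (1 - alpha) * w_emp
    preds.append(float(w @ yhat))

    # After observing y[t], update both performance records
    log_bayes += -0.5 * ((y[t] - yhat) / sigma) ** 2
    emp_loss += (y[t] - yhat) ** 2

mse = np.mean((np.array(preds) - y) ** 2)
print(round(float(mse), 3))
```

Setting `alpha = 1` recovers pure Bayesian weight updating, so the blend can be compared directly against the Bayesian baseline in a small-sample simulation of the kind the authors report.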