Bootstrap prediction and Bayesian prediction under misspecified models

We consider a statistical prediction problem under misspecified models. Bayesian prediction is, in a certain sense, the optimal prediction method when the assumed model is true. Bootstrap prediction is obtained by applying Breiman's 'bagging' method to a plug-in prediction, and it can be regarded as an approximation to Bayesian prediction when the model is true. In applications, however, the data frequently deviate from the assumed model. In this paper, the two prediction methods are compared in terms of the Kullback-Leibler loss under the assumption that the model does not contain the true distribution. We show that bootstrap prediction is asymptotically more effective than Bayesian prediction under misspecified models.
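As a rough numerical illustration of the setting (a sketch under assumed specifics, not the paper's analysis): we fit a normal model to gamma-distributed data, so the model is misspecified, form the plug-in prediction from the MLE, form the bootstrap (bagged) prediction by averaging plug-in predictive densities over resamples, and estimate each method's Kullback-Leibler loss by Monte Carlo. The sample size, bootstrap count, and the choice of gamma data are all illustrative assumptions.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)

def normal_pdf(y, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at y."""
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# True distribution: Gamma(shape=2, scale=1); assumed model: normal (misspecified).
n = 50
x = rng.gamma(shape=2.0, scale=1.0, size=n)

# Plug-in prediction: normal density with MLE parameters.
mu_hat, sigma_hat = x.mean(), x.std()

# Bootstrap (bagged) prediction: average the plug-in predictive density
# over B bootstrap resamples of the data (Breiman's bagging).
B = 500
y_new = rng.gamma(2.0, 1.0, size=20000)  # future observations from the truth
bag_vals = np.zeros_like(y_new)
for _ in range(B):
    xb = rng.choice(x, size=n, replace=True)
    bag_vals += normal_pdf(y_new, xb.mean(), xb.std())
bag_vals /= B

def gamma_logpdf(y):
    """True log-density of Gamma(2, 1)."""
    return np.log(y) - y - lgamma(2.0)

# Monte Carlo estimate of the KL loss E_true[log p_true(Y) - log p_hat(Y)].
plug_loss = np.mean(gamma_logpdf(y_new) - np.log(normal_pdf(y_new, mu_hat, sigma_hat)))
bag_loss = np.mean(gamma_logpdf(y_new) - np.log(bag_vals))
print(f"plug-in KL loss:   {plug_loss:.4f}")
print(f"bootstrap KL loss: {bag_loss:.4f}")
```

For a single dataset the comparison is noisy; the paper's claim concerns the asymptotic expected loss, so a faithful check would average both losses over many replications of the data.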