A misspecification test for simulation metamodels

In this paper we propose a novel misspecification test for simulation metamodels. The test is consistent and provides a principled way to assess the adequacy of a fitted metamodel. The test statistic we construct is shown to be asymptotically normally distributed under the null hypothesis that the metamodel is correctly specified, and to diverge to infinity at rate √n, where n is the test sample size, when the metamodel is inadequate. Furthermore, as a by-product, we construct confidence intervals for the mean squared errors of the metamodels. Preliminary numerical studies indicate that the test performs well and has good finite-sample properties.
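
To make the flavor of such a test concrete, the sketch below implements a generic replication-based adequacy check for a metamodel against a stochastic simulator. It is only an illustration under stated assumptions, not the construction used in this paper: the callables `metamodel` and `simulator`, the replication scheme, and the particular statistic are hypothetical choices introduced here for exposition.

```python
import numpy as np
from scipy.stats import norm


def adequacy_test(metamodel, simulator, X_test, n_reps=10, alpha=0.05):
    """Illustrative sketch of a metamodel misspecification test.

    At each test input x_i we run `n_reps` independent simulation
    replications and compare the squared deviation between the replication
    mean and the metamodel prediction with the Monte Carlo variance of that
    mean. If the metamodel equals the true response surface, each summand
    D_i has mean zero, so the standardized statistic T is approximately
    N(0, 1) for large n; under misspecification E[D_i] > 0 at some inputs
    and T grows at rate sqrt(n), where n = len(X_test).
    """
    X_test = np.asarray(X_test)
    n = len(X_test)

    # Replicated simulation outputs, shape (n, n_reps).
    Y = np.array([[simulator(x) for _ in range(n_reps)] for x in X_test])
    y_bar = Y.mean(axis=1)                     # replication means
    s2 = Y.var(axis=1, ddof=1)                 # replication variances

    pred = np.array([metamodel(x) for x in X_test])

    # D_i estimates (true_mean(x_i) - metamodel(x_i))^2, which is
    # identically zero when the metamodel is correctly specified.
    D = (y_bar - pred) ** 2 - s2 / n_reps

    T = np.sqrt(n) * D.mean() / D.std(ddof=1)  # standardized statistic
    p_value = 1.0 - norm.cdf(T)                # one-sided: reject for large T
    return T, p_value, p_value < alpha
```

In this sketch the metamodel is treated as fixed (fitted on an independent training set), and the replication-based variance correction is one standard way to separate metamodel bias from intrinsic simulation noise; the statistic actually analyzed in the paper may be constructed differently.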