Model evaluation and information criteria in covariance structure analysis

The normal-theory likelihood ratio test statistic, often used to evaluate the goodness of fit of a model in covariance structure analysis, has several problems that can make it inflexible and sometimes unreliable. We introduce criteria for evaluating models in covariance structure analysis from an information-theoretic point of view. The basic idea behind the present approach is to express a model in the form of a probability distribution and then evaluate the model by the Kullback-Leibler information. We consider four types of information criteria, each constructed by correcting the upward bias of the sample-based log-likelihood as a natural estimate of the Kullback-Leibler information or, equivalently, the expected log-likelihood. Monte Carlo experiments are conducted to examine the performance of the information criteria under various sample sizes and degrees of deviation from both structural and distributional assumptions. We show that the variance introduced into the bootstrap bias estimate by the bootstrap simulation itself can be considerably reduced without any analytical derivations.
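The bias-correction idea described above can be sketched in code. The following is a minimal, hypothetical illustration using a univariate normal model rather than a covariance structure model: the upward bias of the maximized log-likelihood is estimated by the bootstrap, and the resulting criterion is compared with AIC (whose fixed penalty equals the number of free parameters). The variance-reduction refinement mentioned in the abstract is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def loglik(x, mu, sigma2):
    # Gaussian log-likelihood of sample x at parameters (mu, sigma2)
    n = len(x)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - 0.5 * np.sum((x - mu) ** 2) / sigma2

def mle(x):
    # Maximum likelihood estimates for a univariate normal model
    return x.mean(), x.var()  # np.var uses ddof=0, i.e. the ML variance estimate

# Illustrative observed data (not from the paper)
x = rng.normal(loc=1.0, scale=2.0, size=200)
mu_hat, s2_hat = mle(x)

# Bootstrap estimate of the upward bias of the sample log-likelihood:
# average, over bootstrap samples, of (fit to the bootstrap sample)
# minus (fit to the original sample) at the bootstrap MLE.
B = 500
bias = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=len(x), replace=True)
    mu_b, s2_b = mle(xb)
    bias[b] = loglik(xb, mu_b, s2_b) - loglik(x, mu_b, s2_b)

# Bias-corrected criterion (EIC-style) vs. AIC with k = 2 free parameters
crit = -2 * (loglik(x, mu_hat, s2_hat) - bias.mean())
aic = -2 * loglik(x, mu_hat, s2_hat) + 2 * 2
print(f"bootstrap bias estimate: {bias.mean():.2f}")
print(f"bias-corrected criterion: {crit:.1f}  vs  AIC: {aic:.1f}")
```

For a well-specified model the bootstrap bias estimate should land near the number of free parameters (here 2), so the two criteria roughly agree; the bootstrap version, however, needs no analytical bias derivation and remains usable when structural or distributional assumptions fail.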