Assessment of Type I Error Rates for the Statistical Sub-model in NONMEM

The aim of this study was to assess the type I error rate when applying the likelihood ratio (LR) test to components of the statistical sub-model in NONMEM. Data were simulated from a one-compartment intravenous bolus pharmacokinetic model. Two models were fitted to the data, the simulation model and a model containing one additional parameter, and the difference in objective function values between the models was calculated. The additional parameter was either (i) a covariate effect on the interindividual variability in CL or V, (ii) a covariate effect on the residual error variability, (iii) a covariance term between CL and V, or (iv) interindividual variability in V. Factors in the simulation conditions (number of individuals and samples per individual, magnitude of interindividual and residual error variability, residual error model) were varied systematically to assess their potential influence on the type I error rate. Different estimation methods within NONMEM were compared. When the first-order conditional estimation method with interaction (FOCE INTER) was used, the estimated type I error rates for inclusion of a covariate effect (i) on the interindividual variability or (ii) on the residual error variability were in agreement with the type I error rate expected under the assumption that the model approximations made by the estimation method are negligible. When the residual error variability was increased, the type I error rates for (iii) inclusion of a covariance term between ηCL and ηV were inflated if the underlying residual distribution was lognormal, or if a normal distribution was combined with too little information in the data (too few samples per subject or sampling at uninformative time-points). For (iv) inclusion of ηV, the type I error rates were affected by the underlying residual error distribution: with a normal distribution the estimated type I error rates were close to the expected rate, whereas with a non-normal distribution the type I error rates increased with increasing residual variability. When the first-order (FO) estimation method was used, the estimated type I error rates were higher than expected in most situations. For the FOCE INTER method, but not the FO method, the LR test is appropriate when the underlying assumptions of normally distributed residuals and of sufficient information in the data hold true. Deviations from these assumptions may lead to inflated type I error rates.
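The general procedure described above, simulating data under a reduced (null) model, fitting both the reduced model and an extended model with one additional parameter, and comparing the drop in the -2 log-likelihood (the NONMEM objective function value) with a chi-square critical value, can be sketched in simplified form outside NONMEM. The Python sketch below is a minimal, hypothetical analogue only: it uses an ordinary linear model with an extraneous covariate effect in place of the nonlinear mixed-effects models of the study, and all sample sizes, parameter values, and replicate counts are illustrative assumptions, not values taken from the paper.

# Simplified illustration (not NONMEM) of estimating the type I error rate
# of the LR test by simulation. Data are generated under the reduced model,
# the reduced and extended models are both fitted by maximum likelihood, and
# the fraction of replicates in which delta(-2LL) exceeds the chi-square(1)
# critical value (3.84 at alpha = 0.05) estimates the type I error rate.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(1)

def neg2ll(params, y, x, extended):
    # Reduced model: y ~ N(a, sigma^2). Extended model adds a covariate
    # effect b*x that is absent from the simulation (true) model.
    if extended:
        a, b, log_sigma = params
        mu = a + b * x
    else:
        a, log_sigma = params
        mu = a
    sigma = np.exp(log_sigma)
    return np.sum(np.log(2 * np.pi * sigma**2) + (y - mu)**2 / sigma**2)

n_reps, n_obs, alpha = 1000, 50, 0.05          # illustrative settings
crit = chi2.ppf(1 - alpha, df=1)               # 3.84 for one extra parameter
rejections = 0

for _ in range(n_reps):
    x = rng.normal(size=n_obs)                 # covariate with no true effect
    y = 10 + rng.normal(scale=2, size=n_obs)   # data from the reduced model
    start_red = [y.mean(), np.log(y.std())]
    start_ext = [y.mean(), 0.0, np.log(y.std())]
    fit_red = minimize(neg2ll, start_red, args=(y, x, False), method="Nelder-Mead")
    fit_ext = minimize(neg2ll, start_ext, args=(y, x, True), method="Nelder-Mead")
    delta_ofv = fit_red.fun - fit_ext.fun      # drop in -2 log-likelihood
    if delta_ofv > crit:
        rejections += 1

print(f"Estimated type I error rate: {rejections / n_reps:.3f} (nominal {alpha})")

In this simplified setting the estimated rate should lie close to the nominal 0.05; the study's point is that for the NONMEM statistical sub-model the agreement depends on the estimation method, the residual error distribution, and the amount of information in the data.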