Estimating regression models of finite but unknown order
Abstract This paper considers some problems associated with estimation and inference in the normal linear regression model y_t = Σ_{j=1}^{m_0} β_j x_{tj} + e_t, var(e_t) = σ², when m_0 is unknown. The regressors are taken to be stochastic and assumed to satisfy V. Grenander's (1954) conditions almost surely. It is further supposed that estimation and inference are undertaken in the usual way, conditional on a value of m_0 chosen to minimize the estimation criterion function EC(m, T) = σ̂²_m + m·g(T) with respect to m, where σ̂²_m is the maximum likelihood estimate of σ². It is shown that, subject to weak side conditions, if g(T) → 0 a.s. and T·g(T) → ∞ a.s., then this estimate is weakly consistent. It follows that estimates conditional on the chosen value of m_0 are asymptotically efficient, and inference undertaken in the usual way is justified in large samples. When T·g(T) converges to a positive constant with probability one, then in large samples m_0 will never be chosen too small, but the probability of choosing it too large remains positive. The results of the paper are stronger than similar ones [R. Shibata (1976); R.J. Bhansali and D.Y. Downham (1977)] in that a known upper bound on m_0 is not assumed. The strengthening is made possible by the assumptions of strictly exogenous regressors and normally distributed disturbances. The main results are used to show that if the model selection criteria of H. Akaike (1974), T. Amemiya (1980), C.L. Mallows (1973) or E. Parzen (1979) are used to choose m_0 in the model above, then in the limit the probability of choosing m_0 too large is at least 0.2883. The approach taken by G. Schwarz (1978) leads to a consistent estimator of m_0, however. These results are illustrated in a small sampling experiment.
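The order-selection rule described in the abstract can be sketched numerically. The following is a minimal Python illustration, assuming simulated data with a true order m_0 = 2 and the Schwarz-type penalty g(T) = log(T)/T, which satisfies the consistency conditions g(T) → 0 and T·g(T) → ∞. All names, the sample size, and the data-generating setup are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: true order m_0 = 2 (illustrative setup, not from the paper).
T = 500
m_max = 6
X = rng.normal(size=(T, m_max))            # stochastic regressors
beta = np.array([1.0, -0.5])               # true coefficients, so m_0 = 2
y = X[:, :2] @ beta + rng.normal(size=T)   # normal disturbances, sigma^2 = 1

def sigma2_hat(y, X):
    """ML estimate of sigma^2: residual sum of squares / T for the OLS fit."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return resid @ resid / len(y)

# EC(m, T) = sigma2_hat_m + m * g(T), minimized over m = 1, ..., m_max.
# g(T) = log(T)/T satisfies g(T) -> 0 and T*g(T) -> infinity, the conditions
# under which the chosen order is consistent.
g = np.log(T) / T
ec = [sigma2_hat(y, X[:, :m]) + m * g for m in range(1, m_max + 1)]
m_chosen = int(np.argmin(ec)) + 1
print(m_chosen)
```

With a penalty of this Schwarz type the criterion recovers the true order in large samples; replacing g(T) with a penalty for which T·g(T) tends to a constant (as in AIC-type criteria) leaves a positive limiting probability of choosing too large an order.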