Statistical inference for a system of simultaneous, nonlinear, implicit equations is discussed. The discussion treats inference as an adjunct to maximum likelihood estimation rather than in a general setting. The null and non-null asymptotic distributions of the Wald test, the Lagrange multiplier test (Rao's efficient score test), and the likelihood ratio test are obtained. Several refinements of the existing theory of maximum likelihood estimation are accomplished as intermediate steps.

It is necessary to compute the power of a statistical test in two instances. The first is in the design of an experiment: one is obliged to verify, prior to the expenditure of resources, that an experimental effect would be detected with reasonably high probability. Peak-load electricity pricing experiments come immediately to mind as examples. The second is when failure to reject a null hypothesis is used to claim that the data support the null hypothesis. To validate this claim, it must be shown that candidate alternatives would have been detected with reasonably high probability.

This article sets forth formulas for asymptotic approximations of the power of tests commonly used in connection with maximum likelihood estimation for a system of simultaneous, nonlinear, implicit equations. The reader who is interested only in this result should skim Sections 2 and 5 to become familiar with the notation and then read Section 6. See Gallant and Jorgenson [7] for similar formulas when two- and three-stage estimation methods are employed instead.

Several refinements of the existing theory of maximum likelihood estimation (Amemiya [2]) are accomplished as intermediate steps in the derivation of the asymptotic approximations. They are as follows. In any theory of nonlinear statistical analysis, various sequences of random functions must converge uniformly in their arguments. However, merely listing these sequences and assuming uniform convergence is not very helpful to the practitioner; conditions that are easily recognized as obtaining or not obtaining in an application are preferable. Here, the notion of Cesàro summable sequences is used to show that uniform convergence obtains if the log likelihood and its derivatives are dominated by integrable functions. If normal errors are imposed, it is shown that the requisite domination may be stated in terms of the structural model itself. The critical assumption is that the limit of the log likelihood must have a unique maximum. This implies strong consistency of the maximum likelihood estimator itself, not merely that there exists a solution of the first-order conditions which is strongly consistent.
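As a point of reference for the dominance condition just described, the display below sketches the kind of uniform strong law it delivers. The symbols are illustrative rather than the article's own ($g$ stands for the log likelihood or one of its derivatives, $d$ for the dominating function, $\Theta$ for the parameter space), and the continuity, measurability, and Cesàro-summability conditions under which the implication actually holds are those stated in the paper.

\[
\sup_{\theta \in \Theta} \bigl| g(y,\theta) \bigr| \le d(y)
\quad \text{with} \quad
\int d(y)\, d\mu(y) < \infty
\;\Longrightarrow\;
\sup_{\theta \in \Theta}
\left| \frac{1}{n} \sum_{t=1}^{n} g(y_t,\theta)
       - \frac{1}{n} \sum_{t=1}^{n} \mathcal{E}\, g(y_t,\theta) \right|
\;\longrightarrow\; 0 \quad \text{almost surely.}
\]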
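As an illustration of how power approximations of the kind given in Section 6 are used in practice, the following sketch converts a noncentrality parameter and a number of restrictions into an approximate power, assuming the test statistic (Wald, Lagrange multiplier, or likelihood ratio) is approximately noncentral chi-square under the alternative. The function name and the numerical values are hypothetical placeholders; the article's formulas supply the noncentrality parameter for a given model and alternative.

```python
# Illustrative sketch only: asymptotic power of a chi-square test (e.g., Wald,
# Lagrange multiplier, or likelihood ratio) under a local alternative, assuming
# the statistic is approximately noncentral chi-square with q degrees of freedom
# and noncentrality lambda_.  The noncentrality appropriate to a particular model
# and alternative comes from the formulas of Section 6; the value below is made up.

from scipy.stats import chi2, ncx2

def asymptotic_power(q, lambda_, alpha=0.05):
    """Approximate power: P{noncentral chi-square(q, lambda_) exceeds the null critical value}."""
    critical_value = chi2.ppf(1.0 - alpha, df=q)       # alpha-level cutoff under the null
    return ncx2.sf(critical_value, df=q, nc=lambda_)   # tail probability under the alternative

if __name__ == "__main__":
    # Hypothetical design check: q = 2 restrictions, noncentrality 7.5 at the
    # candidate alternative; would the effect be detected with high probability?
    print(round(asymptotic_power(q=2, lambda_=7.5), 3))
```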
[1] J. Aitchison, et al. Maximum-Likelihood Estimation of Parameters Subject to Restraints, 1958.
[2] A. Wald. Note on the Consistency of the Maximum Likelihood Estimate, 1949.
[3] Takeshi Amemiya, et al. The Maximum Likelihood and the Nonlinear Three-Stage Least Squares Estimator in the General Nonlinear Simultaneous Equation Model, 1977.
[4] Calyampudi R. Rao. Large sample tests of statistical hypotheses concerning several parameters with applications to problems of estimation, Mathematical Proceedings of the Cambridge Philosophical Society, 1948.
[5] S. R. Searle. Linear Models, 1971.
[6] W. Stute. On a generalization of the Glivenko-Cantelli theorem, 1976.
[7] T. W. Anderson. An Introduction to Multivariate Statistical Analysis, 1959.
[8] R. Jennrich. Asymptotic Properties of Non-Linear Least Squares Estimators, 1969.
[9] T. W. Anderson, et al. An Introduction to Multivariate Statistical Analysis, 1959.
[10] E. Malinvaud. The Consistency of Nonlinear Regressions, 1970.
[11] Franklin M. Fisher, et al. The identification problem in econometrics, 1967.
[12] A. Ronald Gallant, et al. Three-stage least-squares estimation for a system of simultaneous, nonlinear, implicit equations, 1977.
[13] P. Phillips. On the Consistency of Non-Linear FIML, 1980.