Optimal estimation of families of models
Given a class of parametric models $M_k = \{ f(x^n; \theta, k) : \theta = \theta_1, \ldots, \theta_k \in \Omega^k \}$, where $x^n = x_1, \ldots, x_n$ denotes real-valued data and $\theta$ the parameters, the Cramér-Rao inequality gives a lower bound for the covariance of the estimation error when only one model is to be estimated, and the maximum likelihood estimator achieves the lower bound asymptotically; moreover, the bound shrinks to zero as $n \to \infty$. We study the more complicated problem in which a family of models $f(x^n; \theta^1), \ldots, f(x^n; \theta^m)$ is to be estimated. In fact, hypothesis testing may be viewed as such a problem. We show that there is a family of models, which we call optimally distinguishable, that can be estimated with the smallest worst-case error. Moreover, if we let the number of models $m_n$ grow with $n$, there is a fastest growth rate such that if $m_n$ grows more slowly, the members can be estimated consistently, i.e. without error in the limit as $n \to \infty$, and otherwise not. This is reminiscent of, and related to, Shannon's channel capacity, which, however, cannot as such be applied to the problem considered here.
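For reference, the single-model statement the abstract alludes to is the standard Cramér-Rao bound: for any unbiased estimator $\hat\theta(x^n)$ of $\theta$, the covariance of the estimation error is bounded below by the inverse Fisher information,

\[
\mathrm{Cov}_\theta\bigl(\hat\theta(x^n)\bigr) \;\succeq\; I_n(\theta)^{-1},
\qquad
I_n(\theta) = \mathbb{E}_\theta\!\left[\nabla_\theta \log f(x^n;\theta)\,\nabla_\theta \log f(x^n;\theta)^{\top}\right],
\]

and for i.i.d. data $I_n(\theta) = n\, I_1(\theta)$, so the bound shrinks at rate $1/n$ as $n \to \infty$, which is the sense in which the single-model estimation error vanishes in the limit.

The growth-rate phenomenon for families can be illustrated with a minimal simulation sketch, not taken from the paper: distinguishing $m_n$ equally spaced Gaussian-mean models from $n$ unit-variance samples by picking the candidate mean nearest to the sample mean. The function name and the specific setup below are illustrative assumptions, chosen only because the sample mean resolves means at scale $1/\sqrt{n}$, so slow growth of $m_n$ permits consistent identification while fast growth does not.

import numpy as np

rng = np.random.default_rng(0)

def identification_error(n, m, trials=2000):
    """Estimate the probability of misidentifying which of m candidate
    Gaussian-mean models generated n i.i.d. unit-variance samples.

    Candidate means are equally spaced on [0, 1]; the true model is drawn
    uniformly among them, and the estimator picks the candidate closest to
    the sample mean (the ML choice for unit-variance Gaussian data).
    """
    means = np.linspace(0.0, 1.0, m)
    errors = 0
    for _ in range(trials):
        true_idx = rng.integers(m)
        x = rng.normal(means[true_idx], 1.0, size=n)
        est_idx = np.argmin(np.abs(means - x.mean()))
        errors += (est_idx != true_idx)
    return errors / trials

# Slow growth m_n ~ n**0.25 keeps the spacing between candidate means well
# above the 1/sqrt(n) resolution of the sample mean, so the error probability
# tends to zero; fast growth m_n ~ n packs the models too closely to separate.
for n in (100, 1000, 10000):
    slow = identification_error(n, max(2, int(n ** 0.25)))
    fast = identification_error(n, max(2, n))
    print(f"n={n:6d}  slow-growth error={slow:.3f}  fast-growth error={fast:.3f}")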