Empirical bias-reducing adjustments to estimating functions
We develop a novel, general framework for the asymptotic reduction of the bias of $M$-estimators from unbiased estimating functions. The framework relies on additive, empirical adjustments to the estimating functions that depend only on the first two derivatives of the contributions to the estimating functions. The new estimation method has markedly broader applicability than previous bias-reduction methods, applying to models that are either partially specified or whose likelihood is intractable or expensive to compute, in which case a surrogate objective is employed. The method also lends itself to easy, general implementations for arbitrary models through automatic differentiation, in contrast to other popular bias-reduction methods that require either resampling or evaluation of expectations of products of log-likelihood derivatives. If $M$-estimation is by the maximization of an objective function, then reduced-bias $M$-estimation can be achieved by maximizing an appropriately penalized objective. That penalized objective relates closely to information criteria based on the Kullback-Leibler divergence, establishing, for the first time, a strong link between reduction of estimation bias and model selection. The reduced-bias $M$-estimators are found to have the same asymptotic distribution, and hence the same asymptotic efficiency properties, as the original $M$-estimators, and we discuss inference and model selection with reduced-bias $M$-estimates. The properties of reduced-bias $M$-estimation are illustrated in well-used, important modelling settings of varying complexity.
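As a rough illustration of the ingredients involved, the sketch below uses automatic differentiation (here JAX, an assumed choice of framework) to obtain the first two derivatives of the contributions to a logistic-regression log-likelihood and to assemble the two empirical matrices from which such adjustments and penalties are typically built. The `penalized_objective` function, its `lam` argument, and the specific trace penalty are hypothetical illustrations in the spirit of the KL-based criteria mentioned above, not the paper's exact formulae.

```python
# Minimal sketch, assuming a logistic-regression log-likelihood; not the paper's
# exact adjustment or penalty, only the autodiff ingredients the abstract names.
import jax
import jax.numpy as jnp

def loglik_i(theta, x_i, y_i):
    """Contribution of one observation to the log-likelihood (logistic regression)."""
    eta = x_i @ theta
    return y_i * eta - jnp.log1p(jnp.exp(eta))

# First two derivatives of each contribution, vectorised over observations.
grad_i = jax.vmap(jax.grad(loglik_i), in_axes=(None, 0, 0))     # (n, p) scores
hess_i = jax.vmap(jax.hessian(loglik_i), in_axes=(None, 0, 0))  # (n, p, p) Hessians

def empirical_matrices(theta, X, y):
    g = grad_i(theta, X, y)                 # per-observation score contributions
    H = hess_i(theta, X, y)                 # per-observation Hessian contributions
    e = jnp.einsum("ip,iq->pq", g, g)       # empirical outer-product ("meat") matrix
    j = -jnp.sum(H, axis=0)                 # empirical information (negative Hessian)
    return j, e

def penalized_objective(theta, X, y, lam=0.5):
    # Hypothetical penalized objective: the log-likelihood plus a trace penalty
    # built from j and e; the sign and scaling of the actual penalty differ and
    # are not reproduced here.
    j, e = empirical_matrices(theta, X, y)
    ll = jnp.sum(jax.vmap(loglik_i, in_axes=(None, 0, 0))(theta, X, y))
    return ll + lam * jnp.trace(jnp.linalg.solve(j, e))
```

A reduced-bias fit could then, in principle, be obtained by maximizing such a penalized objective with any gradient-based optimizer, e.g. using `jax.grad(penalized_objective)`.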