A class of local likelihood methods and near‐parametric asymptotics

The local maximum likelihood estimate θ̂_t of a parameter in a statistical model f(x, θ) is defined by maximizing a weighted version of the likelihood function that gives more weight to observations in the neighbourhood of t. The paper studies the sense in which f(t, θ̂_t) is closer to the true density g(t) than the usual estimate f(t, θ̂) is. Asymptotic results are presented for the case in which the model misspecification becomes vanishingly small as the sample size tends to ∞. In this setting, the relative entropy risk of the local method is smaller than that of ordinary maximum likelihood. The form of the optimum weights for the local likelihood is obtained and illustrated for the normal distribution.
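
In the usual formulation of this idea, the local estimate maximizes a kernel-weighted log-likelihood: with weights w(·) centred at t, a typical form is

  ℓ_t(θ) = Σ_{i=1}^n w(x_i − t) log f(x_i, θ),   θ̂_t = argmax_θ ℓ_t(θ),

which reduces to the ordinary likelihood when w is constant. As a minimal sketch of the normal case mentioned above, the Python code below fits the N(θ, 1) model at a point t using a Gaussian kernel of bandwidth h; both the kernel and the bandwidth are illustrative assumptions rather than the paper's optimum weights, and for this model the weighted likelihood has a closed-form maximizer (the kernel-weighted mean).

```python
import numpy as np

def local_mle_normal_mean(x, t, h):
    """Local MLE of theta in the N(theta, 1) model at the point t.

    Maximizes sum_i w_i * log f(x_i, theta) with Gaussian kernel weights
    w_i = exp(-(x_i - t)^2 / (2 h^2)); the maximizer is the weighted mean.
    """
    w = np.exp(-0.5 * ((x - t) / h) ** 2)  # kernel weights centred at t
    return np.sum(w * x) / np.sum(w)

rng = np.random.default_rng(0)
# Mildly misspecified data: the normal model is fitted to slightly skewed observations
x = rng.normal(size=200) + 0.3 * rng.normal(size=200) ** 2

t = 1.0
theta_hat = x.mean()                              # usual (global) MLE
theta_hat_t = local_mle_normal_mean(x, t, h=0.5)  # local MLE at t
print(f"global MLE: {theta_hat:.3f}, local MLE at t = {t}: {theta_hat_t:.3f}")
```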