Limiting the Risk of Bayes and Empirical Bayes Estimators—Part I: The Bayes Case

Abstract The first part of this article considers the Bayesian problem of estimating the mean, θ, of a normal distribution when the mean itself has a normal prior. The usual Bayes estimator for this situation has high risk if θ is far from the mean of the prior distribution. We suggest rules which do not have this bad property and still perform well against the normal prior. These rules are compromises between the Bayes rule and the MLE. Similar rules are suggested for the empirical Bayes situation where the mean and variance of the prior are unknown but can be estimated from the data provided by several simultaneous estimation problems. In this case the suggested rules compromise between the James-Stein estimator of a mean vector and the MLE.
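The sketch below illustrates the setup described in the abstract, under the standard conjugate assumptions X ∣ θ ~ N(θ, σ²) and θ ~ N(μ, τ²): the Bayes rule shrinks the MLE x toward the prior mean μ, which can move the estimate far from x when x is far from μ. The `limited_translation_estimate` function shown here is only an illustrative compromise that caps how far the estimate may move from the MLE; the cap used is an assumption for exposition, not the paper's exact rule.

```python
import numpy as np

def bayes_estimate(x, mu, tau2, sigma2=1.0):
    """Posterior mean of theta when X ~ N(theta, sigma2) and theta ~ N(mu, tau2)."""
    shrink = sigma2 / (sigma2 + tau2)   # shrinkage factor toward the prior mean
    return x - shrink * (x - mu)        # shrink the MLE x toward mu

def limited_translation_estimate(x, mu, tau2, sigma2=1.0, max_shift=1.0):
    """Illustrative compromise between the Bayes rule and the MLE:
    the Bayes estimate, but never moved more than `max_shift` standard
    deviations away from x.  (The bound is an assumption for illustration,
    not the specific rule proposed in the paper.)"""
    delta = bayes_estimate(x, mu, tau2, sigma2) - x
    bound = max_shift * np.sqrt(sigma2)
    return x + np.clip(delta, -bound, bound)

# When x is far from the prior mean, the plain Bayes rule shrinks heavily,
# while the capped rule stays close to the MLE.
x = 6.0
print(bayes_estimate(x, mu=0.0, tau2=1.0))                # 3.0: shrunk halfway to mu
print(limited_translation_estimate(x, mu=0.0, tau2=1.0))  # 5.0: at most 1 sd from x
```

The design intent mirrors the abstract: the capped rule nearly matches the Bayes rule when x is consistent with the prior, but its risk stays close to that of the MLE when θ turns out to be far from the prior mean.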