ON THE METHOD OF PENALIZATION

In this article, we study convergence properties of the method of penalization and related estimates. A penalized estimate is defined as an optimizer of a scaled criterion with a penalty that penalizes undesirable properties of the parameters. We develop some exponential probability bounds for the penalized likelihood ratios with a general penalty. Based on these inequalities, rates of convergence of the penalized estimates can be quantified. When convergence is measured by the Hellinger distance, the rate of convergence of the penalized maximum likelihood estimate depends only on the size of the parameter space and the penalization coefficient. We also explore the role of penalty in the penalization process, especially its relationship with the convergence properties and its connection with Bayesian analysis. We illustrate the theory by several examples.
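
As a sketch of the setup described above (the notation here — $\hat{\theta}_n$, $\lambda_n$, $J(\theta)$ — is illustrative, not taken from the article): given i.i.d. observations $X_1,\dots,X_n$ with density $p_\theta$, a penalized maximum likelihood estimate maximizes the log-likelihood minus a scaled penalty,
$$
\hat{\theta}_n \;=\; \operatorname*{arg\,max}_{\theta \in \Theta}\;\Bigl[\, \frac{1}{n}\sum_{i=1}^{n} \log p_\theta(X_i) \;-\; \lambda_n\, J(\theta) \,\Bigr],
$$
where $J(\theta) \ge 0$ penalizes undesirable properties of the parameter (e.g., roughness) and $\lambda_n > 0$ is the penalization coefficient. Convergence can then be measured by the Hellinger distance
$$
h(p_{\theta_1}, p_{\theta_2}) \;=\; \Bigl( \tfrac{1}{2} \int \bigl( \sqrt{p_{\theta_1}} - \sqrt{p_{\theta_2}} \bigr)^2 \,d\mu \Bigr)^{1/2}.
$$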