A study of the optimality of approximate maximum likelihood estimation

Maximum Likelihood Estimation (MLE) is widely used in the computer vision literature as a means of solving parameter estimation problems under a Gaussian noise model for the measurement data. Solving an MLE problem requires knowledge of the true parameters of the Gaussian noise model. Since this knowledge is unobtainable in practical settings, approximate MLE has become a popular alternative. The theory behind the approximate MLE framework is presented and an analysis of the bias characteristics of the method for noisy data is performed. Several experiments are then carried out to ascertain the optimality of approximate MLE solutions and to determine whether or not there is a correlation between the degree and dimension of the algebraic hypersurface and the optimality of the error metric.
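
To make the setting concrete, the following is a minimal sketch of one common form of approximate MLE for fitting an algebraic hypersurface to noisy points, namely a Sampson-style (first-order) approximation of the ML cost applied to conic fitting. The specific problem (a conic), the carrier parameterisation, the isotropic noise covariance, and all function names are illustrative assumptions, not the paper's own formulation.

```python
import numpy as np
from scipy.optimize import minimize

def carrier(x, y):
    """Carrier vector u(x) for a conic: theta . u(x) = 0 on the curve (assumed parameterisation)."""
    return np.array([x * x, x * y, y * y, x, y, 1.0])

def carrier_jacobian(x, y):
    """Jacobian of u(x) with respect to the data point (x, y), shape (6, 2)."""
    return np.array([
        [2 * x, 0.0],
        [y,     x  ],
        [0.0, 2 * y],
        [1.0,   0.0],
        [0.0,   1.0],
        [0.0,   0.0],
    ])

def aml_cost(theta, points, cov=np.eye(2)):
    """Approximate ML (Sampson-style) cost: sum over points of the squared
    algebraic residual divided by its first-order variance."""
    theta = theta / np.linalg.norm(theta)          # fix the scale gauge
    total = 0.0
    for x, y in points:
        u = carrier(x, y)
        J = carrier_jacobian(x, y)
        num = (theta @ u) ** 2                     # squared algebraic residual
        den = theta @ (J @ cov @ J.T) @ theta      # first-order residual variance
        total += num / den
    return total

# Hypothetical usage: noisy samples from a circle of radius 2.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2 * np.pi, 50)
pts = np.c_[2 * np.cos(t), 2 * np.sin(t)] + 0.05 * rng.normal(size=(50, 2))

theta0 = np.array([1.0, 0.0, 1.0, 0.0, 0.0, -1.0])  # rough initial conic guess
res = minimize(aml_cost, theta0, args=(pts,), method="Nelder-Mead")
theta_hat = res.x / np.linalg.norm(res.x)
```

The key point the sketch illustrates is that the denominator replaces the unknown orthogonal (true ML) residual with a first-order approximation built from the noise covariance of the data, which is what makes the scheme an approximation to MLE rather than exact MLE.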