Levenberg-Marquardt Learning and Regularization

Levenberg-Marquardt learning was first introduced to feedforward networks to improve the speed of training. The method is an improved Gauss-Newton method with an extra term that prevents ill-conditioning. Interestingly, if we regard the learning as a constrained least-squares problem, that extra term becomes a regularization term that deals with additive noise in the training samples. In this paper, we look at Levenberg-Marquardt learning from the viewpoint of regularization. We show that Levenberg-Marquardt learning admits other forms of regularization operators through simple modifications. In addition, with the inclusion of a test on the validation error, the regularization parameter can be chosen in such a way that both the training error and the validation error decrease, thus preventing over-training.
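
For reference, the standard Levenberg-Marquardt weight update can be written as follows (generic notation, which may differ from the symbols used in the body of the paper), where $J$ is the Jacobian of the network error vector $e$ with respect to the weights $w$, and $\mu > 0$ is the damping parameter:

\[ \Delta w = -\left(J^{\top} J + \mu I\right)^{-1} J^{\top} e . \]

Setting $\mu = 0$ recovers the Gauss-Newton step, so the $\mu I$ term is the "extra term" that both guards against an ill-conditioned $J^{\top} J$ and, in the constrained least-squares view, acts as a regularizer. One natural reading of the "other forms of regularization operators" mentioned above is the Tikhonov-style generalization $\Delta w = -\left(J^{\top} J + \mu P^{\top} P\right)^{-1} J^{\top} e$, with a regularization operator $P$ replacing the identity; this is offered only as an illustrative sketch, not as the paper's exact formulation.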