Improving generalization of MLPs with sliding mode control and the Levenberg-Marquardt algorithm

A variation of the well-known Levenberg-Marquardt algorithm for training neural networks is proposed in this work. The algorithm restricts the norm of the weight vector to a pre-established value and finds the minimum-error solution for that norm. The norm constraint controls the neural network's degrees of freedom: the larger the norm, the more flexible the neural model and, therefore, the more closely it fits the training set. A range of solutions with different norms is generated, and the solution with the best generalization is selected according to the validation-set error. The results show the efficiency of the algorithm in terms of generalization performance.
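The norm-sweep idea can be illustrated with a minimal sketch. This is not the paper's method: it replaces the Levenberg-Marquardt update with plain numerical-gradient descent and enforces the norm constraint by projecting the weight vector back onto a sphere of fixed radius after each step; the toy data, network size, and the list of candidate norms are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem (illustrative, not from the paper).
X_train = rng.uniform(-1, 1, (40, 1))
y_train = np.sin(3 * X_train) + 0.1 * rng.normal(size=X_train.shape)
X_val = rng.uniform(-1, 1, (20, 1))
y_val = np.sin(3 * X_val)

H = 8  # hidden units in a one-hidden-layer MLP

def init_weights():
    # Flattened parameter vector: W1 (H x 1), b1 (H), W2 (1 x H), b2 (1).
    return rng.normal(scale=0.5, size=3 * H + 1)

def unpack(w):
    W1 = w[:H].reshape(H, 1)
    b1 = w[H:2 * H]
    W2 = w[2 * H:3 * H].reshape(1, H)
    b2 = w[3 * H:]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1.T + b1)
    return h @ W2.T + b2

def mse(w, X, y):
    return float(np.mean((forward(w, X) - y) ** 2))

def num_grad(w, X, y, eps=1e-5):
    # Central-difference gradient; fine for a toy-sized parameter vector.
    g = np.zeros_like(w)
    for k in range(w.size):
        wp, wm = w.copy(), w.copy()
        wp[k] += eps
        wm[k] -= eps
        g[k] = (mse(wp, X, y) - mse(wm, X, y)) / (2 * eps)
    return g

def train_fixed_norm(norm, steps=300, lr=0.1):
    # Minimize training error subject to ||w|| = norm (projection stands
    # in for the constrained LM update described in the paper).
    w = init_weights()
    w *= norm / np.linalg.norm(w)
    for _ in range(steps):
        w = w - lr * num_grad(w, X_train, y_train)
        w *= norm / np.linalg.norm(w)  # project back onto the norm sphere
    return w

# Sweep a range of norms; larger norms give more flexible models.
norms = [0.5, 1.0, 2.0, 4.0, 8.0]
solutions = [(n, train_fixed_norm(n)) for n in norms]

# Select the solution that generalizes best on the validation set.
best_norm, best_w = min(solutions, key=lambda s: mse(s[1], X_val, y_val))
print("selected norm:", best_norm)
print("validation MSE:", mse(best_w, X_val, y_val))
```

Each candidate norm yields one trained network; the validation set then acts as the model-selection criterion, trading training fit against flexibility exactly as the abstract describes.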