Decisor implementation in neural model selection by multiobjective optimization

This work presents a new learning scheme for improving the generalization of multilayer perceptrons (MLPs). The proposed multiobjective algorithm minimizes both the sum of squared errors and the norm of the network weight vector to obtain the Pareto-optimal solutions. Since the Pareto-optimal solution is not unique, a decision phase (the "decisor") is needed to choose one of them as the final solution, using a validation set. The final solution is expected to balance network bias and variance and, as a result, to achieve high generalization capacity, avoiding both overfitting and underfitting.
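The sketch below illustrates the overall scheme under simplifying assumptions: the Pareto set is approximated by a weighted-sum (L2-penalty) scalarization of the two objectives rather than the paper's multiobjective solver, scikit-learn's MLPRegressor stands in for the MLP, and the names build_pareto_set and decisor are illustrative only. Each trained network trades training error against weight norm; the decisor then picks the candidate with the lowest validation error.

```python
# Minimal sketch of the Pareto-set + decisor scheme (assumptions: weighted-sum
# scalarization via an L2 penalty as a proxy for the multiobjective optimizer;
# function names are hypothetical, not from the paper).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy regression data: noisy sine, split into training and validation sets.
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=300)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def weight_norm(net):
    """Euclidean norm of all network weights (the complexity objective)."""
    return np.sqrt(sum(np.sum(w ** 2) for w in net.coefs_))

def build_pareto_set(alphas):
    """Approximate the Pareto set by sweeping the penalty strength alpha;
    each trained network trades SSE against weight norm differently."""
    candidates = []
    for alpha in alphas:
        net = MLPRegressor(hidden_layer_sizes=(10,), alpha=alpha,
                           max_iter=5000, random_state=0).fit(X_tr, y_tr)
        sse = np.sum((net.predict(X_tr) - y_tr) ** 2)
        candidates.append((net, sse, weight_norm(net)))
    return candidates

def decisor(candidates):
    """Decision phase: select the candidate with the lowest validation SSE."""
    val_sse = [np.sum((net.predict(X_val) - y_val) ** 2)
               for net, _, _ in candidates]
    return candidates[int(np.argmin(val_sse))][0]

pareto_set = build_pareto_set(alphas=np.logspace(-4, 1, 12))
best_net = decisor(pareto_set)
print("selected weight norm:", round(weight_norm(best_net), 3))
```

In this simplified form, sweeping the penalty strength plays the role of generating candidate solutions along the error/complexity trade-off, while the validation-based selection is what the abstract calls the decision phase.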