Efficient cross-validation for feedforward neural networks

This paper studies the use of cross-validation for estimating the prediction risk of feedforward neural networks, addressing in particular the variability introduced by the random choice of initial weights for training. The authors demonstrate that nonlinear cross-validation, in which the network is retrained on each held-out subset, may not prevent the retrained network from falling into the "wrong" perturbed local minimum. A modified approach is proposed that reduces cross-validation to a linear problem; it is more efficient and does not suffer from the local-minimum problem. Simulation results for two regression problems are discussed.
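The computational appeal of reducing cross-validation to a linear problem can be illustrated with the classical leave-one-out identity for linear (here, ridge-penalized) least squares: all n held-out residuals follow from a single fit via the hat matrix, with no retraining and hence no risk of converging to a different local minimum. The sketch below is not the authors' method; it only demonstrates the underlying linear-algebra shortcut, with all data and the penalty `lam` invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)
lam = 0.1  # fixed ridge penalty (illustrative value)

# Hat matrix H maps y to fitted values for the ridge solution.
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
resid = y - H @ y
# Exact leave-one-out residuals from one fit: e_i / (1 - H_ii).
loo_fast = resid / (1.0 - np.diag(H))

# Brute-force check: refit n times, each time dropping one point.
loo_slow = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    Xi, yi = X[mask], y[mask]
    w = np.linalg.solve(Xi.T @ Xi + lam * np.eye(p), Xi.T @ yi)
    loo_slow[i] = y[i] - X[i] @ w

print(np.allclose(loo_fast, loo_slow))  # → True
```

For a trained network, one way to obtain such a linear problem is to linearize the network output in its weights around the converged solution; the identity above then gives approximate leave-one-out errors without any retraining.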