Comparison of learning algorithms for feedforward networks in large scale networks and problems

The performance of five learning algorithms for feedforward networks applied to several large-scale experiments is evaluated and discussed. In particular, the following algorithms are compared in terms of convergence ability and speed: ALECO-2, a recently proposed constrained optimization learning algorithm; on-line and off-line backpropagation; Fahlman's Quickprop; and Jacobs' Delta-Bar-Delta. All of these learning techniques are applied to three representative large-scale benchmark training tasks (two large encoders and one large multiplexer) in a uniform way so as to guarantee a fair comparison. The results of this experimental study show clearly that ALECO-2 outperforms all its rivals in terms of convergence ability and speed.
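
For reference, the weight-update rules of the baseline algorithms are well established in the literature and can be sketched as follows; the notation here is conventional and not taken from this study ($E$ is the error function, $w$ a single weight, $\delta(t) = \partial E / \partial w$, and $\eta$, $\alpha$, $\kappa$, $\phi$, $\theta$ are hyperparameters). Backpropagation with momentum updates each weight by

\[ \Delta w(t) = -\eta \frac{\partial E}{\partial w} + \alpha\, \Delta w(t-1), \]

applied after every training pattern in the on-line variant and once per epoch in the off-line (batch) variant. Fahlman's Quickprop replaces the fixed step with a quadratic approximation built from successive gradients $S(t) = \partial E / \partial w$:

\[ \Delta w(t) = \frac{S(t)}{S(t-1) - S(t)}\, \Delta w(t-1). \]

Jacobs' Delta-Bar-Delta maintains a separate learning rate $\eta_w$ for each weight, adapted according to the sign agreement between the current gradient and an exponentially weighted average of past gradients:

\[ \bar{\delta}(t) = (1-\theta)\,\delta(t) + \theta\,\bar{\delta}(t-1), \qquad \Delta \eta_w(t) = \begin{cases} \kappa & \text{if } \bar{\delta}(t-1)\,\delta(t) > 0,\\ -\phi\,\eta_w(t-1) & \text{if } \bar{\delta}(t-1)\,\delta(t) < 0,\\ 0 & \text{otherwise.} \end{cases} \]

ALECO-2's constrained optimization update is specific to the cited work and is not reproduced here.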