Faster, higher-quality training of feedforward neural networks by select updating

A new training method for feedforward neural networks is presented which exploits results from matrix perturbation theory to achieve a significant reduction in training time. The theory is used to assess the effect of a particular training pattern on the weight estimates before that pattern is included in an iteration. Patterns that would not significantly change the weights are excluded from that iteration, obviating the computational expense of updating.
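The abstract does not spell out the perturbation-theoretic selection test itself, so the sketch below is only an illustration of the general idea of select updating, not the paper's algorithm. It assumes a hypothetical one-hidden-layer network trained by per-pattern gradient descent, and uses each pattern's a priori output error, available from the forward pass alone, as a cheap stand-in for the paper's criterion: patterns whose error falls below a threshold are skipped, so the costlier backward pass and weight update never run. All names here (SelectiveMLP, tol, train_epoch) are invented for illustration.

    # Hedged sketch of select updating: a cheap forward-pass test decides
    # whether the expensive backward pass and weight update are worth doing.
    # The skip criterion (error norm < tol) is an assumed proxy, not the
    # matrix-perturbation test described in the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class SelectiveMLP:
        def __init__(self, n_in, n_hidden, n_out, lr=0.5, tol=1e-2):
            self.W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
            self.W2 = rng.normal(scale=0.5, size=(n_out, n_hidden))
            self.lr = lr
            self.tol = tol  # skip threshold (assumed form)

        def forward(self, x):
            h = sigmoid(self.W1 @ x)
            y = sigmoid(self.W2 @ h)
            return h, y

        def train_epoch(self, X, T):
            updates = 0
            for x, t in zip(X, T):
                h, y = self.forward(x)   # cheap: forward pass only
                e = t - y                # a priori error for this pattern
                # Selection test: if the induced weight change would be
                # negligible, skip the backward pass and update entirely.
                if np.linalg.norm(e) < self.tol:
                    continue
                delta2 = e * y * (1 - y)
                delta1 = (self.W2.T @ delta2) * h * (1 - h)
                self.W2 += self.lr * np.outer(delta2, h)
                self.W1 += self.lr * np.outer(delta1, x)
                updates += 1
            return updates

    # Toy usage: XOR. Training stops once every pattern is skipped,
    # i.e. no pattern would significantly change the weights.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    net = SelectiveMLP(2, 4, 1)
    for epoch in range(5000):
        if net.train_epoch(X, T) == 0:
            break

The design point this mirrors is that the selection test must be much cheaper than the update it may avoid; here the forward pass already yields the error, so the test adds essentially no cost beyond a norm and a comparison.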