A sparse matrix approach to neural network training

A new training technique, based on the sparse matrix concept, is developed for training multilayer perceptrons. The proposed approach exploits the patterns of neuron activations in neural networks and substantially reduces the amount of computation in backpropagation. The proposed training algorithm is applied to word recognition using TI20 real speech data. Compared with techniques that do not use the sparse concept, the same or better recognition accuracy is achieved and training speed is substantially improved.
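The abstract states the idea only at a high level. As a purely illustrative sketch of activation-sparsity-aware backpropagation, the NumPy snippet below skips the gradient terms associated with hidden units whose activations are (near) zero; the layer sizes, ReLU hidden layer, sparsity threshold, and squared-error loss are all assumptions made for this example and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer MLP (sizes are illustrative assumptions).
n_in, n_hid, n_out = 16, 64, 4
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))

x = rng.normal(size=n_in)
t = np.zeros(n_out)
t[1] = 1.0                                 # one-hot target

# Forward pass (ReLU hidden layer, linear output).
h = np.maximum(W1 @ x, 0.0)                # hidden activations
y = W2 @ h

# Backward pass exploiting activation sparsity: hidden units with
# (near-)zero activation contribute nothing to the gradient of W2
# and pass no error back through the ReLU, so their rows/columns
# of the gradient matrices can be skipped entirely.
delta_out = y - t                          # dE/dy for squared error
active = np.flatnonzero(h > 1e-6)          # indices of active units

grad_W2 = np.zeros_like(W2)
grad_W2[:, active] = np.outer(delta_out, h[active])

delta_hid = np.zeros(n_hid)
delta_hid[active] = W2[:, active].T @ delta_out  # ReLU' = 1 on active units

grad_W1 = np.zeros_like(W1)
grad_W1[active, :] = np.outer(delta_hid[active], x)

print(f"{active.size}/{n_hid} hidden units active; "
      f"backward-pass cost scales with the active fraction")
```

With a thresholded activation pattern, the cost of both outer products scales with the number of active hidden units rather than the full layer width, which is the kind of saving a sparse-matrix formulation of backpropagation targets.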
