Reducing computations in incremental learning for feedforward neural network with long-term memory

When neural networks are trained incrementally, previously learned input-output relationships tend to be destroyed by the learning of new training data. This phenomenon is called "interference". To suppress the interference, we have proposed an incremental learning system (called RAN-LTM), in which a long-term memory (LTM) is introduced into a resource allocating network (RAN). Since RAN-LTM must train not only new data but also some LTM data to suppress the interference, a large amount of computation is required when many LTM data are retrieved. It is therefore important to design appropriate procedures for producing and retrieving LTM data in RAN-LTM. In this paper, these procedures in the previous version of RAN-LTM are improved. In simulations, the improved RAN-LTM is applied to the approximation of a one-dimensional function, and its approximation error and training speed are evaluated in comparison with RAN and the previous RAN-LTM.
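
The sketch below is a minimal Python illustration of the general idea described in the abstract: a resource-allocating network grows Gaussian hidden units for novel inputs, stores an LTM item when a unit is allocated, and rehearses only a few retrieved LTM items together with each new datum so that earlier input-output relations are not overwritten. The class name `RanLtmSketch`, the novelty thresholds, and the nearest-neighbor retrieval policy are illustrative assumptions, not the exact production and retrieval procedures proposed in the paper.

```python
import numpy as np

class RanLtmSketch:
    """Toy resource-allocating network with a long-term memory (LTM).

    All hyperparameters and the LTM policy (one memory item per allocated
    hidden unit, retrieve the few items nearest to the new input) are
    illustrative assumptions, not the paper's exact procedures.
    """

    def __init__(self, novelty_dist=0.5, novelty_err=0.05,
                 width=0.5, lr=0.05, n_retrieve=3):
        self.centers, self.weights = [], []   # RBF centers and output weights
        self.bias = 0.0
        self.ltm = []                          # list of (x, y) memory items
        self.novelty_dist, self.novelty_err = novelty_dist, novelty_err
        self.width, self.lr, self.n_retrieve = width, lr, n_retrieve

    def _phi(self, x):
        # Gaussian activations of all hidden units for a scalar input x
        c = np.array(self.centers)
        return np.exp(-((x - c) ** 2) / (2 * self.width ** 2))

    def predict(self, x):
        if not self.centers:
            return self.bias
        return float(np.dot(self.weights, self._phi(x)) + self.bias)

    def _retrieve_ltm(self, x):
        # Retrieve only the few LTM items closest to the new input,
        # rather than rehearsing the whole memory (reduces computation)
        items = sorted(self.ltm, key=lambda m: abs(m[0] - x))
        return items[:self.n_retrieve]

    def train_one(self, x, y):
        err = y - self.predict(x)
        far = (not self.centers or
               min(abs(x - c) for c in self.centers) > self.novelty_dist)
        if far and abs(err) > self.novelty_err:
            # Novel input: allocate a new hidden unit and store an LTM item
            self.centers.append(x)
            self.weights.append(err)
            self.ltm.append((x, y))
        else:
            # Gradient step on the new datum plus a few retrieved LTM items,
            # so old input-output relations are rehearsed (less interference)
            for xr, yr in [(x, y)] + self._retrieve_ltm(x):
                e = yr - self.predict(xr)
                phi = self._phi(xr)
                self.weights = list(np.array(self.weights) + self.lr * e * phi)
                self.bias += self.lr * e


if __name__ == "__main__":
    # Incrementally learn a one-dimensional function, mirroring the
    # simulation setting described in the abstract
    rng = np.random.default_rng(0)
    net = RanLtmSketch()
    for x in rng.uniform(0, 2 * np.pi, 500):
        net.train_one(x, np.sin(x))
    xs = np.linspace(0, 2 * np.pi, 50)
    mse = np.mean([(net.predict(x) - np.sin(x)) ** 2 for x in xs])
    print(f"hidden units: {len(net.centers)}, test MSE: {mse:.4f}")
```

Capping the number of retrieved LTM items per update is one simple way to keep the rehearsal cost bounded as the memory grows, which is the computational concern the paper addresses.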
