A fast incremental learning algorithm of RBF networks with long-term memory

To avoid catastrophic interference in incremental learning, we have proposed a resource allocating network with long-term memory (RAN-LTM). In RAN-LTM, not only a new training sample but also some memory items stored in long-term memory are trained based on a gradient descent algorithm. The gradient descent algorithm, however, is usually slow and can easily fall into local minima. To solve these problems, we propose a fast incremental learning algorithm for RAN-LTM, in which the RBF centers are not trained but selected based on output errors. This model does not require much memory capacity, and it also realizes robust incremental learning. To verify these characteristics of RAN-LTM, we apply it to two function approximation problems: one-dimensional function approximation and prediction of the Mackey-Glass time series. From the experimental results, it is verified that the proposed RAN-LTM can learn fast and accurately without a large amount of main memory, unless incremental learning is conducted over a long period of time.
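The core idea outlined above, allocating RBF centers directly from inputs that produce large output errors and rehearsing items stored in long-term memory when the output weights are recomputed, can be illustrated with a minimal sketch. The class name, the error-threshold rule, and the least-squares weight update below are assumptions for illustration only, not the paper's exact algorithm.

```python
import numpy as np

class IncrementalRBF:
    """Illustrative sketch: error-driven center selection with rehearsal
    of long-term-memory items (not the authors' exact formulation)."""

    def __init__(self, sigma=1.0, error_threshold=0.1):
        self.sigma = sigma                      # common Gaussian width (assumed fixed)
        self.error_threshold = error_threshold  # allocate a center when |error| exceeds this
        self.centers = []                       # RBF centers (selected, not trained)
        self.weights = None                     # output-layer weights
        self.ltm = []                           # long-term memory: (input, target) pairs

    def _phi(self, x):
        # Gaussian activations of all hidden units for input x
        c = np.asarray(self.centers)
        return np.exp(-np.sum((c - x) ** 2, axis=1) / (2.0 * self.sigma ** 2))

    def predict(self, x):
        if not self.centers:
            return 0.0
        return float(self._phi(x) @ self.weights)

    def learn(self, x, y):
        x = np.asarray(x, dtype=float)
        error = y - self.predict(x)
        if not self.centers or abs(error) > self.error_threshold:
            # Select the new input as an RBF center and store it as a memory item
            self.centers.append(x)
            self.ltm.append((x, float(y)))
        # Recompute output weights from the memory items only (rehearsal),
        # avoiding slow gradient descent over the full past data stream.
        Phi = np.array([self._phi(xm) for xm, _ in self.ltm])
        targets = np.array([ym for _, ym in self.ltm])
        self.weights, *_ = np.linalg.lstsq(Phi, targets, rcond=None)
```

For example, feeding the network one (input, target) pair at a time via `learn` only adds a hidden unit when the prediction error is large, so the number of centers, and hence the memory footprint, grows with the approximation difficulty rather than with the number of training samples.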