Incremental Backpropagation Learning Networks

How to learn new knowledge without forgetting old knowledge is a key issue in designing an incremental-learning neural network. In this paper, we present a new incremental learning method for pattern recognition, called the "incremental backpropagation learning network," which employs bounded weight modification and structural adaptation learning rules and applies initial knowledge to constrain the learning process. The viability of this approach is demonstrated for classification problems including the iris and the promoter domains.

An incremental learning system updates its hypotheses as a new instance arrives, without reexamining old instances. In other words, an incremental-learning system learns Y based on X, then learns Z based on Y, and so on. Such a learning strategy is both spatially and temporally economical, since it need not store and reprocess old instances. It is especially crucial for a learning system that continually receives input and must process it in real time. Learning from a single instance has also been an important topic in machine learning; at present, humans appear to learn better than machines from single instances. The backpropagation learning network is not incremental in nature: suppose it is trained on instance set A and then retrained on set B; its knowledge about set A may be lost. To learn a new instance while keeping old memory, the backpropagation network has to be trained on the new instance along with the old instances. In the case of nonstationary data statistics, the network should adapt to the new instance while preserving previous knowledge that does not conflict with it. One technique is to minimize the network output error with respect to old instances, subject to the approximation of the network output to the desired output of the new instance [11]. This is not an incremental learning technique, however, since old instances must still be reexamined.
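The forgetting behavior described above can be illustrated with a minimal sketch. The code below is not the IBPLN algorithm; it is a single logistic unit trained by plain single-instance gradient descent (all data, weights, and learning-rate values are illustrative assumptions). After fitting instance set A, the unit is retrained on one new, conflicting instance alone, and its accuracy on set A degrades:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x):
    # w[0] is a bias weight; the rest pair with the inputs
    return sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))

def sgd_step(w, x, y, lr=0.5):
    # One unconstrained gradient step for logistic loss, using only
    # the current instance -- no access to previously seen data.
    err = predict(w, x) - y
    w[0] -= lr * err
    for i, xi in enumerate(x):
        w[i + 1] -= lr * err * xi

# Instance set A: a trivially separable two-point task (illustrative)
set_a = [([0.0, 0.0], 0), ([1.0, 1.0], 1)]
w = [0.0, 0.0, 0.0]
for _ in range(200):
    for x, y in set_a:
        sgd_step(w, x, y)
acc_before = sum((predict(w, x) > 0.5) == bool(y) for x, y in set_a)

# A new instance that conflicts with set A; retraining on it alone,
# with unbounded weight modification, overwrites the old mapping.
for _ in range(200):
    sgd_step(w, [1.0, 1.0], 0)
acc_after = sum((predict(w, x) > 0.5) == bool(y) for x, y in set_a)

print(acc_before, acc_after)  # accuracy on set A drops after retraining
```

Bounding how far each weight may move when a new instance arrives, as the IBPLN learning rules do, is one way to limit exactly this kind of overwriting.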
In this paper, we present a new incremental learning method for pattern recognition, called the incremental backpropagation learning network (IBPLN), which employs bounded weight modification and structural adaptation learning rules. Experimental results are then described.