A neural network architecture for incremental learning

Abstract Artificial neural networks are widely used for category classification. A trained network correctly classifies patterns it has already learned, but it may misclassify patterns it has never seen, and the network must then be retrained to correct the errors. During this retraining, a multi-layered perceptron (MLP) must learn both the new patterns (those it cannot yet classify correctly) and the old patterns (those it has already learned), so the MLP incurs an unnecessary computational cost from relearning the old patterns. The adaptive resonance theory (ART) model can memorize new patterns without relearning old ones because it learns incrementally, but its classification ability is limited. This paper proposes a neural network architecture for incremental learning, called the ‘Neural network based on Distance between Patterns’ (NDP). The NDP has a two-layered hierarchical structure whose output layer consists of many radial basis function neurons. It performs incremental learning by adding neurons to the output layer and by varying the centers and gradients of the radial basis functions. Consequently, the NDP can memorize new patterns without relearning old ones while retaining superior classification ability; this incremental learning mechanism distinguishes the NDP from conventional radial basis function neural networks. In addition, experiments on image recognition demonstrate the effectiveness of the NDP.
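The abstract's growth mechanism can be illustrated with a minimal sketch: a classifier of Gaussian radial basis function units that, when a pattern is misclassified, adds a new output unit centered on that pattern instead of retraining existing units. This is an assumption-laden illustration of the general idea, not the NDP algorithm itself; the class name, the fixed-width Gaussian, and the grow-on-error rule are all hypothetical simplifications.

```python
import numpy as np

class IncrementalRBF:
    """Illustrative sketch (not the paper's NDP): an RBF classifier that
    grows incrementally. A misclassified pattern becomes the center of a
    new Gaussian unit, so previously learned patterns are never retrained."""

    def __init__(self):
        self.centers = []  # one Gaussian center per stored pattern
        self.labels = []   # class label associated with each unit
        self.widths = []   # spread (gradient) of each Gaussian unit

    def _activations(self, x):
        # Gaussian activation of every unit for input x
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2.0 * w ** 2))
                         for c, w in zip(self.centers, self.widths)])

    def predict(self, x):
        # Classify by the most strongly activated unit
        if not self.centers:
            return None
        a = self._activations(np.asarray(x, dtype=float))
        return self.labels[int(np.argmax(a))]

    def learn(self, x, label, width=1.0):
        # Incremental learning: only misclassified (new) patterns
        # cause a new unit to be added; old patterns are left untouched.
        x = np.asarray(x, dtype=float)
        if self.predict(x) == label:
            return
        self.centers.append(x)
        self.labels.append(label)
        self.widths.append(width)
```

In this sketch, learning a new pattern never modifies existing units, so old knowledge cannot be overwritten; the trade-off is that the number of units grows with the number of novel patterns, which is why the paper's NDP also adapts the centers and gradients of its radial basis functions rather than only adding neurons.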