Abstract Artificial neural networks are widely used for category classification. A trained neural network correctly classifies the patterns on which it was trained, but it sometimes misclassifies patterns it has never seen and must then be retrained to correct these errors. During such retraining, a multi-layered perceptron (MLP) must learn both the new patterns (those it cannot yet classify correctly) and the old patterns (those it has already learned); relearning the old patterns makes the MLP computationally expensive. The adaptive resonance theory (ART) model can memorize new patterns incrementally, without relearning old ones, but its classification ability is limited. This paper proposes a neural network architecture for incremental learning, called the 'Neural network based on Distance between Patterns' (NDP). The NDP has a two-layered hierarchical structure whose output layer consists of many radial basis function neurons. It learns incrementally by adding neurons to the output layer and adjusting the centers and gradients of the radial basis functions. The NDP can therefore memorize new patterns without relearning old ones while retaining superior classification ability; it differs from conventional radial basis function neural networks in this incremental learning. Experiments on image recognition demonstrate the effectiveness of the NDP.
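The incremental learning scheme described above (growing an RBF output layer instead of retraining on old patterns) can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the class name, the activation threshold, the initial gradient value, and the rule for sharpening conflicting units are all assumptions introduced for illustration.

```python
import numpy as np

class NDPSketch:
    """Hypothetical sketch of an incrementally growing RBF classifier in the
    spirit of the NDP. Update rules and constants are assumptions."""

    def __init__(self, threshold=0.5, init_gradient=1.0):
        self.centers = []          # one RBF center per memorized pattern
        self.gradients = []        # per-unit gradients (inverse widths)
        self.labels = []
        self.threshold = threshold
        self.init_gradient = init_gradient

    def _activations(self, x):
        # Gaussian radial basis functions of the distance to each center
        return [np.exp(-g * np.sum((x - c) ** 2))
                for c, g in zip(self.centers, self.gradients)]

    def classify(self, x):
        if not self.centers:
            return None
        acts = self._activations(np.asarray(x, dtype=float))
        i = int(np.argmax(acts))
        return self.labels[i] if acts[i] >= self.threshold else None

    def learn(self, x, label):
        """Incremental learning: only the new pattern is presented;
        previously learned patterns are never revisited."""
        x = np.asarray(x, dtype=float)
        if self.classify(x) == label:
            return  # already classified correctly; nothing to change
        # Misclassified or unknown: add a new RBF unit centered on the pattern
        self.centers.append(x)
        self.gradients.append(self.init_gradient)
        self.labels.append(label)
        # Sharpen (increase the gradient of) any conflicting unit whose basin
        # still covers the new pattern, so its response falls below threshold
        for i in range(len(self.centers) - 1):
            if self.labels[i] != label:
                d2 = np.sum((x - self.centers[i]) ** 2)
                if d2 > 0 and np.exp(-self.gradients[i] * d2) >= self.threshold:
                    self.gradients[i] = -np.log(self.threshold) / d2 * 1.5
```

Because `learn` only adds or locally adjusts units, each new pattern is memorized in one step without re-presenting the old training set, which is the computational advantage the abstract claims over MLP retraining.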