On Refractory Parameter of Chaotic Neurons in Incremental Learning

This paper develops an incremental learning method using chaotic neurons, originally called "on-demand learning" when it was first proposed. Incremental learning unifies the learning process and the recall process in associative memories. The method exploits features of the chaotic neuron model first developed by Prof. Aihara: the spatio-temporal summation of inputs and the refractoriness of neurons. Because of the temporal summation of inputs, the network can learn from noisy inputs. However, it is not obvious that refractoriness is necessary for incremental learning. In this paper, computer simulations investigate the role that refractoriness plays in incremental learning. The simulation results show that refractoriness is an essential factor, but that excessively strong refractoriness causes failures to learn patterns.
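As a rough illustration of the two mechanisms the abstract names, the following is a minimal sketch of a single Aihara-style chaotic neuron. It is not the authors' exact formulation: the parameter values (`k`, `alpha`, `eps`) and the single-neuron simplification are assumptions for demonstration. The decay factor `k` implements the temporal summation of past inputs, and the `alpha` term implements refractoriness (self-inhibition proportional to the neuron's recent output).

```python
import math

def simulate_chaotic_neuron(inputs, k=0.7, alpha=1.0, eps=0.02):
    """Simulate one chaotic neuron over a sequence of external inputs.

    k     -- decay factor for the internal state (temporal summation);
             assumed value, not from the paper.
    alpha -- refractoriness strength; larger values inhibit firing
             more strongly after the neuron has fired.
    eps   -- steepness parameter of the sigmoid output function.
    """
    def f(y):
        # Steep sigmoid output; argument clamped to avoid overflow.
        z = max(min(y / eps, 700.0), -700.0)
        return 1.0 / (1.0 + math.exp(-z))

    y, x = 0.0, 0.0          # internal state and output
    outputs = []
    for a in inputs:
        # Decayed past state + external input - refractory self-inhibition.
        y = k * y + a - alpha * x
        x = f(y)
        outputs.append(x)
    return outputs
```

With a constant input, the refractory term keeps the output from saturating: each strong firing inhibits the next step, which is the kind of destabilizing dynamics the paper's simulations vary via the refractoriness strength.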
