Relabeling exchange method (REM) for learning in neural networks

The supervised training of neural networks requires output labels, which are usually assigned arbitrarily. In this paper it is shown that there is a significant difference in the RMS error of learning when `optimal' label assignment schemes are used. We have investigated two efficient random search algorithms for solving the relabeling problem: simulated annealing and the genetic algorithm. However, we found both to be computationally expensive. We therefore introduce a new heuristic algorithm, the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure of multi-layered perceptrons and neural tree networks. The method is general and can be implemented as a modification to standard training algorithms. The motivation for the new relabeling strategy is based on the current interpretation of dyslexia as an encoding problem.
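
The abstract does not spell out REM's update rule, but the core idea of an exchange heuristic for label assignment can be sketched as follows. This minimal Python sketch, under assumptions not taken from the paper, matches classes to candidate target codes and greedily swaps pairs of code assignments whenever a swap lowers the total squared error; the names `relabel_exchange`, `class_means`, and `labels` are illustrative, not the authors'.

```python
import numpy as np
from itertools import combinations

def relabel_exchange(class_means, labels):
    """Greedy pairwise-exchange heuristic for the label-assignment
    problem (a sketch in the spirit of REM, not the paper's exact rule).

    class_means : (C, D) array, mean network output for each class
    labels      : (C, D) array, candidate target codes, one per class

    Returns a permutation `assign` so that labels[assign[c]] is the
    code given to class c, chosen to reduce total squared error.
    """
    C = len(class_means)
    assign = np.arange(C)  # start from the arbitrary initial assignment

    def cost(a):
        # Total squared error between class outputs and assigned codes
        return np.sum((class_means - labels[a]) ** 2)

    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(C), 2):
            trial = assign.copy()
            trial[i], trial[j] = trial[j], trial[i]  # exchange two codes
            if cost(trial) < cost(assign):
                assign = trial
                improved = True
    return assign
```

For example, with one-hot codes and class-conditional mean outputs from a partially trained network, the returned permutation gives a relabeling from which a standard training algorithm can continue. The loop terminates because each accepted exchange strictly lowers the cost and there are finitely many assignments.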
