The Pandemonium system of reflective agents

The Pandemonium system of reflective MINOS agents solves problems by automatically and dynamically modularizing the input space. The agents contain feedforward neural networks that adapt using the backpropagation algorithm. We demonstrate the performance of Pandemonium on several categories of problems: learning continuous functions with discontinuities, separating two spirals, learning the parity function, and optical character recognition. The advantages gained from modularization are shown to depend strongly on the nature of the problem. The superiority of the Pandemonium method over a single network on the first two categories contrasts with its limited advantages on the last two. In the first case the system converges more quickly with modularization and arrives at simpler solutions; in the second, a flat decomposition of the input space does not significantly simplify the problem, although convergence is still faster.
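Each agent in the system is a feedforward network trained by backpropagation. The following is a minimal, illustrative sketch of such a learner on the two-bit parity (XOR) problem, the smallest instance of the parity task mentioned above; the architecture (2-4-1 sigmoid units), learning rate, and epoch count are assumptions for the sketch, not details taken from the paper:

```python
import numpy as np

# Minimal feedforward network trained with plain backpropagation,
# illustrating the kind of learner a single Pandemonium/MINOS agent
# contains. This is a generic sketch, not the paper's implementation:
# the 2-4-1 sigmoid architecture and hyperparameters are assumptions.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two-bit parity (XOR) training set.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)
lr = 1.0

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden activations
    return h, sigmoid(h @ W2 + b2)    # network output

initial_mse = np.mean((forward(X)[1] - y) ** 2)

for _ in range(10000):
    h, out = forward(X)
    # Backward pass: gradient of mean squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

final_mse = np.mean((forward(X)[1] - y) ** 2)
pred = (forward(X)[1] > 0.5).astype(int)
print(pred.ravel())  # typically converges to the parity labels 0 1 1 0
```

A Pandemonium-style system would run several such agents in parallel, each specializing on a region of the input space and the most confident agent answering for each input; the single net above is only the building block.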
