In this paper, we investigate a form of modular neural network for classification with (a) pre-separated input vectors entering its specialist expert networks, (b) specialist networks that are of the self-organized radial-basis-function or self-targeted feedforward type, and (c) a fusion network that integrates the specialists with a single-layer net. When the modular architecture is applied to spatiotemporal sequences, the Specialist Nets are recurrent; specifically, we use the Input Recurrent type. The Specialist Networks (SNs) learn to divide their input space into a number of equivalence classes defined by self-organized clustering, learning from the statistical properties of the input domain. Once the specialists have settled in their training, the Fusion Network is trained by any supervised method to map to the semantic classes. We discuss the fact that this architecture and its training are quite distinct from the hierarchical mixture of experts (HME) type as well as from stacked generalization. Because the equivalence classes to which the SNs map the input vectors are determined by the natural clustering of the input data, the SNs learn rapidly and accurately. The fusion network also trains rapidly by reason of its simplicity. We argue, on theoretical grounds, that the accuracy of the system should be positively correlated with the product of the numbers of equivalence classes over all of the SNs. This network was applied, as an empirical test case, to the classification of melodies presented as direct audio events (temporal sequences) played by a human and subject, therefore, to biological variation. The audio input was divided into two modes: (a) frequency (pitch) variation and (b) rhythm, both as functions of time. The results and observations show the technique to be very robust and support the theoretical deductions concerning accuracy.
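The abstract gives no implementation details, but the two-stage training it describes can be sketched in a few lines. The sketch below is a minimal illustration, assuming k-means as the self-organized clustering inside each specialist network and a scikit-learn logistic-regression layer standing in for the single-layer fusion net; the class and parameter names (ModularClassifier, n_equiv_classes_per_sn, and so on) are hypothetical, not from the paper.

```python
# Hedged sketch of the two-stage training described in the abstract.
# Assumptions: k-means as each SN's self-organized clustering, and a
# single softmax layer (LogisticRegression) as the fusion network.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

class ModularClassifier:
    def __init__(self, n_equiv_classes_per_sn):
        # One specialist per pre-separated input mode (e.g. pitch, rhythm).
        self.specialists = [KMeans(n_clusters=k, n_init=10)
                            for k in n_equiv_classes_per_sn]
        self.fusion = LogisticRegression(max_iter=1000)  # single-layer net

    def fit(self, mode_inputs, labels):
        # Stage 1: each SN self-organizes its own input mode into
        # equivalence classes from the statistics of that mode alone.
        for sn, X in zip(self.specialists, mode_inputs):
            sn.fit(X)
        # Stage 2: freeze the SNs, then train the fusion net (supervised)
        # to map the concatenated equivalence-class codes to semantic classes.
        self.fusion.fit(self._codes(mode_inputs), labels)
        return self

    def _codes(self, mode_inputs):
        # One-hot equivalence-class code from each SN, concatenated.
        blocks = []
        for sn, X in zip(self.specialists, mode_inputs):
            ids = sn.predict(X)
            blocks.append(np.eye(sn.n_clusters)[ids])
        return np.hstack(blocks)

    def predict(self, mode_inputs):
        return self.fusion.predict(self._codes(mode_inputs))

# Illustrative usage with two input modes, as in the melody experiment:
# clf = ModularClassifier([8, 4]).fit([X_pitch, X_rhythm], y_melody)
```

Note that with k_i equivalence classes in the i-th SN, the concatenated code can take at most the product of the k_i distinct joint values, which is the combinatorial ground for the abstract's claim that accuracy should correlate with that product.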