On neural network design — Part I: Using the MVQ algorithm

In this two-part study we present a new design methodology for neural classifiers. The design procedure uses a multiclass vector quantization (MVQ) algorithm to extract information from the training set. The extracted information suffices to specify the hidden layer in a canonical neural network architecture; it also leads to the specification of neuron inhibition rules and, subsequently, the design of the hidden layer-to-output map. In Part I of the study we focus on the MVQ algorithm and how it is used to extract information from a training set. The extracted information is referred to as the codebook. The codebook directly specifies the hidden layer, which can take the form of a perceptron layer, a radial basis layer, or a heterogeneous layer mixing neuron types. These and other h-layer specifications are determined directly from the same extracted information. The MVQ codebook also suffices to scale the activation function of each neuron. In Part II we consider the nontrivial design of the hidden layer-to-output map. We note that the MVQ algorithm, as it extracts information, decomposes the design set into disjoint neighborhoods. For each neighborhood we identify subsets of the hidden-layer neurons that are significant sensors for that neighborhood, and for each such subset we construct an output map. Inhibition rules ensure that the proper output map is activated. In benchmark simulations the overall design performs so well that we are hard pressed to identify any bounds on its performance.
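To make the codebook-to-hidden-layer step concrete, the following minimal sketch illustrates one way a codebook could specify a radial basis hidden layer. The MVQ algorithm itself is not detailed in this abstract, so the sketch simply assumes the codebook is a set of prototype vectors; the nearest-neighbor width rule used here is an illustrative assumption, not the authors' scaling procedure.

```python
# Illustrative sketch only: assumes the codebook is an (m, d) array of
# prototype vectors. Each hidden neuron is centered on one codebook vector,
# and its activation is scaled (assumed rule) by the distance to the nearest
# other codebook vector.
import numpy as np

def rbf_hidden_layer(codebook_vectors):
    """Build a radial-basis hidden-layer map from codebook vectors.

    codebook_vectors : (m, d) array of prototype vectors (the codebook).
    Returns a function mapping inputs (n, d) -> hidden activations (n, m).
    """
    centers = np.asarray(codebook_vectors, dtype=float)
    # Pairwise distances between codebook vectors.
    diffs = centers[:, None, :] - centers[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)
    # Assumed scaling rule: each neuron's width is its nearest-neighbor distance.
    widths = dists.min(axis=1)

    def hidden(x):
        x = np.atleast_2d(x)
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
        return np.exp(-(d / widths) ** 2)

    return hidden

# Usage: a toy codebook of three prototypes in the plane.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
h = rbf_hidden_layer(codebook)
print(h(np.array([[0.1, 0.1], [0.9, 0.9]])))  # (2, 3) hidden activations
```

A perceptron or heterogeneous h-layer would be specified analogously from the same codebook, with each prototype determining one neuron's parameters.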