Abstract A parallel processing network derived from Kanerva's associative memory theory (Kanerva, 1984) is shown to train rapidly on connected speech data and to recognize further speech data with a label error rate of 0·68%. This modified Kanerva model can be trained substantially faster than other networks with comparable pattern-discrimination properties. Kanerva presented his theory of a self-propagating search in 1984 and showed theoretically that large-scale versions of his model would have powerful pattern-matching properties. This paper describes how the design of the modified Kanerva model is derived from Kanerva's original theory. Several designs are tested to discover which form can be implemented fastest while still maintaining versatile recognition performance. A method is developed to deal with the time-varying nature of the speech signal by recognizing static patterns together with a fixed quantity of contextual information. To recognize speech features in different contexts, a network must be able to model disjoint pattern classes. This type of modelling cannot be performed by a single layer of links. Network research was once held back by the inability of single-layer networks to solve this sort of problem and by the lack of a training algorithm for multi-layer networks. Rumelhart, Hinton & Williams (1985) provided one solution by demonstrating the "back propagation" training algorithm for multi-layer networks. A second alternative is used in the modified Kanerva model: a non-linear fixed transformation maps the pattern space into a space of higher dimensionality in which the speech features are linearly separable, and a single-layer network then performs the recognition. The advantage of this solution over multi-layer networks lies in the greater power and speed of the single-layer training algorithm.
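The architecture described above — a fixed non-linear expansion into a higher-dimensional space, followed by a single trainable layer — can be sketched in Kanerva's style as a set of random "hard locations" that fire when the input falls within a Hamming radius of their addresses, with a delta-rule output layer on top. The sketch below is illustrative only, not the paper's implementation: the dimensions, radius, learning rate, and toy data are all assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in = 16       # input pattern bits (hypothetical size for illustration)
n_loc = 512     # "hard locations": the higher-dimensional expansion
radius = 5      # Hamming radius within which a location becomes active
n_classes = 3

# Fixed, untrained non-linear transform: each location has a random
# binary address and fires when the input lies within `radius` of it.
addresses = rng.integers(0, 2, size=(n_loc, n_in))

def expand(x):
    """Map a binary input to a sparse, higher-dimensional binary code."""
    dist = np.sum(addresses != x, axis=1)   # Hamming distance to each address
    return (dist <= radius).astype(float)

# The only trainable part: a single layer of weights, updated with the
# Widrow-Hoff delta rule on the expanded code.
W = np.zeros((n_classes, n_loc))

def train_step(W, x, target, lr=0.005):
    a = expand(x)
    W += lr * np.outer(target - W @ a, a)   # nudge output toward target

# Toy demo: three random binary prototypes with one-hot class labels.
protos = rng.integers(0, 2, size=(n_classes, n_in))
targets = np.eye(n_classes)
for _ in range(200):
    for x, t in zip(protos, targets):
        train_step(W, x, t)

pred = [int(np.argmax(W @ expand(x))) for x in protos]
```

Because the expanded codes of distinct patterns are (almost surely) linearly independent in the high-dimensional space, the single layer can separate classes that are not linearly separable in the original pattern space — which is the point of the fixed transformation.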
[1] D. Zipser et al. (1988). Learning the hidden structure of speech. The Journal of the Acoustical Society of America.
[2] J. D. Keeler et al. (1986). Comparison between sparsely distributed memory and Hopfield-type neural network models.
[3] G. E. Hinton et al. (1986). Learning internal representations by error propagation.
[4] J. Keeler (1987). Information capacity of outer-product neural networks.
[5] D. Marr (1969). A theory of cerebellar cortex. The Journal of Physiology.
[6] B. Widrow et al. (1988). Adaptive switching circuits.