The authors have previously developed a sequential algorithm for training a multi-layer perceptron classifier (1993). The idea is to exploit the fact that the locations of boundary segments are local divisions. Training is achieved by updating local covariances with the recursive least squares (RLS) algorithm. The algorithm is sequential in the sense that each training example is presented only once, and the network learns and/or expands on the arrival of each example. The major advantage of this sequential scheme is the feasibility of pipelining the training procedure in a truly parallel architecture. The authors present a systolic array implementation of the sequential input space partitioning (SISP) algorithm.
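The single-pass covariance update that underlies such a scheme can be sketched with the standard RLS (Sherman-Morrison) recursion; this is a minimal NumPy illustration, not the authors' implementation, and the variable names and forgetting factor are assumptions:

```python
import numpy as np

def rls_covariance_update(P, x, lam=1.0):
    """One RLS update of the inverse-covariance matrix P after
    observing input vector x (Sherman-Morrison identity).
    lam is an optional forgetting factor; 1.0 means no forgetting."""
    Px = P @ x
    return (P - np.outer(Px, Px) / (lam + x @ Px)) / lam

# Sequential processing: each example is seen exactly once,
# and the statistics are updated on its arrival.
rng = np.random.default_rng(0)
d = 3
delta = 1e3                    # large initial scale ~ weak prior
P = np.eye(d) * delta
X = rng.normal(size=(50, d))
for x in X:
    P = rls_covariance_update(P, x)

# Sanity check: the recursion reproduces the batch inverse
batch = np.linalg.inv(X.T @ X + np.eye(d) / delta)
print(np.allclose(P, batch))   # True
```

Because each update touches only a local covariance, successive examples can flow through a pipeline of processing elements, which is what makes a systolic-array mapping attractive.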