Nonseparable data models for a single-layer perceptron

This paper describes two nonseparable data models that can be used to study the convergence properties of perceptron learning algorithms. The training signal is generated by a system identification formulation whose input is a zero-mean Gaussian random vector. One model is based on a two-layer perceptron configuration, while the second has a single layer with a multiplicative output node. The analysis in this paper focuses on Rosenblatt's training procedure, although the approach can be applied to other learning algorithms. Examples of the performance surfaces are presented to illustrate possible convergence points of the algorithm for both nonseparable data models.
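The setup can be illustrated with a minimal numerical sketch. Here a fixed two-layer "teacher" perceptron (the first nonseparable data model) labels zero-mean Gaussian inputs, and a single-layer "student" perceptron is trained on that signal with Rosenblatt's rule. The dimensions, step size, and teacher weights below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Teacher: a fixed two-layer perceptron generating the training signal in a
# system identification configuration (sizes are illustrative assumptions).
N = 8                              # input dimension (assumed)
H = 3                              # hidden nodes in the teacher (assumed)
W1 = rng.standard_normal((H, N))   # hidden-layer weights
w2 = np.ones(H)                    # output-layer weights

def desired(x):
    """Desired signal d(x) in {-1, +1}; odd H keeps the sum nonzero."""
    return np.sign(w2 @ np.sign(W1 @ x))

# Rosenblatt's training procedure for the single-layer student perceptron.
w = np.zeros(N)
mu = 0.05                          # step size (assumed)
for _ in range(20000):
    x = rng.standard_normal(N)     # zero-mean Gaussian input vector
    d = desired(x)
    y = 1.0 if w @ x >= 0 else -1.0
    w += mu * (d - y) / 2.0 * x    # weights change only on misclassified samples

# Because the data are nonseparable, the error rate cannot reach zero; the
# weight vector instead settles near a minimum of the performance surface.
test = rng.standard_normal((2000, N))
err = np.mean(np.sign(test @ w) != np.array([desired(x) for x in test]))
print(err)
```

A single-layer student cannot reproduce the teacher's two-layer mapping exactly, so the residual error rate stays strictly positive, which is the regime whose performance surfaces the paper examines.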