A Convergence Theorem for Hierarchies of Model Neurones

The threshold logic unit (T.L.U.) has been proposed as a model for a single neurone; other substantially cognate terms are “perceptron” and “adaline”. Networks of these elements have been advanced as tentative models of some aspects of brain functioning. In particular, hierarchical nets appear to exhibit sufficient flexibility to make them interesting both as plausible models of learning in the central nervous system and as general objects of study in connection with pattern recognition and artificial intelligence.

In this paper, we discuss the well-known “perceptron convergence theorem” in a fairly general setting, and consider variations appropriate to nets of such units. A certain familiarity with the relevant chapters of Nilsson’s Learning Machines [1] and with current mathematical formalism is presupposed.
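For concreteness, the sketch below records the machinery under discussion: a T.L.U. fires when a weighted sum of its inputs exceeds zero (the threshold being absorbed, as is usual, into a final weight acting on a constant input of 1), and the fixed-increment error-correction procedure, whose termination on linearly separable training sets is the content of the perceptron convergence theorem, adjusts the weights only on misclassified patterns. The rendering in Python and the names chosen are our own illustrative conventions, not Nilsson’s.

    def tlu_response(w, x):
        """Threshold logic unit: fire iff the weighted sum exceeds zero."""
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

    def train_fixed_increment(patterns, labels, dim, max_passes=100):
        """Fixed-increment error correction: cycle through the training
        set, correcting w on each error; stop after a clean pass."""
        w = [0.0] * dim
        for _ in range(max_passes):
            errors = 0
            for x, d in zip(patterns, labels):
                if tlu_response(w, x) != d:
                    # Add the pattern if the unit should have fired,
                    # subtract it otherwise.
                    sign = 1 if d == 1 else -1
                    w = [wi + sign * xi for wi, xi in zip(w, x)]
                    errors += 1
            if errors == 0:
                break
        return w

    # Example: the two-input OR function on augmented patterns (x1, x2, 1),
    # a linearly separable set, so termination is guaranteed.
    patterns = [(0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
    labels = [0, 1, 1, 1]
    w = train_fixed_increment(patterns, labels, dim=3)

Since the OR patterns are linearly separable, the theorem guarantees that the procedure halts after finitely many corrections with a weight vector realising the function; it is this guarantee whose scope and variants are examined in what follows.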