Functionality in neural nets

We investigate the functional capabilities of sparse networks of computing elements in accumulating knowledge through successive learning experiences. As experiences, we consider various combinations of episodic and concept learning, in supervised or unsupervised mode, of conjunctions and of disjunctions. For these we exhibit algorithms that learn in well-defined senses. Each concept or episode is expressible in terms of concepts or episodes already known, and is thus learned hierarchically, without disturbing previous knowledge. We make minimal assumptions about the computing elements, taking them to be classical threshold elements with states, and we adhere to severe resource constraints: each new concept or episode requires storage linear in the relevant parameters, and the algorithms take very few steps. We hypothesise that, in our context, functionality is limited more by the communication bottlenecks in the networks than by the computing capabilities of the elements, and hence that this approach may prove useful in understanding biological systems even in the absence of accurate neurophysiological models.
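
As a purely illustrative sketch, and not the algorithms of this paper, the following Python fragment shows how a conjunction or disjunction of already-known concepts could be realised by a single classical threshold element, allocated hierarchically without disturbing existing nodes and with storage linear in the number of constituents; the names ThresholdUnit and Network are assumptions introduced only for this example.

```python
# Illustrative sketch only: hierarchical allocation of conjunction and
# disjunction nodes over classical threshold elements.  The class names
# and wiring scheme are assumptions for this example, not the paper's
# actual constructions.

class ThresholdUnit:
    """Classical threshold element: fires iff the weighted sum of its
    active inputs reaches the threshold."""
    def __init__(self, inputs, weights, threshold):
        self.inputs = inputs        # names of presynaptic concept nodes
        self.weights = weights      # one weight per input (linear storage)
        self.threshold = threshold

    def fires(self, active):
        total = sum(w for name, w in zip(self.inputs, self.weights)
                    if name in active)
        return total >= self.threshold


class Network:
    """Each new concept is wired only to concepts already known, so
    learning is hierarchical and never alters existing nodes."""
    def __init__(self):
        self.nodes = {}

    def learn_conjunction(self, name, parts):
        # Fires only when every constituent concept is active.
        self.nodes[name] = ThresholdUnit(parts, [1] * len(parts), len(parts))

    def learn_disjunction(self, name, parts):
        # Fires when at least one constituent concept is active.
        self.nodes[name] = ThresholdUnit(parts, [1] * len(parts), 1)

    def evaluate(self, stimulus):
        # Propagate activity upward through the hierarchy; each node
        # changes state at most once, so recognition takes few steps.
        active = set(stimulus)
        changed = True
        while changed:
            changed = False
            for name, unit in self.nodes.items():
                if name not in active and unit.fires(active):
                    active.add(name)
                    changed = True
        return active


net = Network()
net.learn_conjunction("black_cat", ["black", "cat"])
net.learn_disjunction("pet", ["cat", "dog"])
print(net.evaluate({"black", "cat"}))  # {'black', 'cat', 'black_cat', 'pet'}
```

In this toy setting, each learned node stores one weight per constituent (linear storage), and evaluation updates each node at most once, consistent with the resource constraints described above; it is offered only as an aid to intuition.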