The logic of connectionist systems

A connectionist system is a cellular network of adaptable nodes that has a natural propensity for storing knowledge. This emergent property arises from the interaction of a training process and a pattern of connections. Most analyses of such systems first assume an idiosyncratic specification for the nodes (often based on neuron models) and a constrained method of interconnection (reciprocity, no feedback, etc.). In contrast, this paper assumes a general node model based on a logic truth table with a probabilistic element. It is argued that this model subsumes other node definitions and leads to a general analysis of the class of connectionist systems. The analysis includes an explanation of the effect of training and testing techniques that involve the use of noise. Specifically, the paper describes a way of predicting and optimizing noise-based training through the definition of an ideal node logic that ensures the most rapid descent of the resulting probabilistic automaton into the trained stable states. ‘Hard’ learning is shown to be achievable on the notorious parity-checking problem with a level of performance two orders of magnitude better than well-known error backpropagation techniques demonstrated on the same topology. It is concluded that there are two main areas of advantage in this approach: first, the direct probabilistic automaton model, which covers and explains connectionist approaches in general; and second, the potential for high-performance implementations of such systems.
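
To make the node model concrete, the sketch below illustrates one plausible reading of a truth-table node with a probabilistic element: each addressable entry stores 0, 1, or an ‘unknown’ state that emits 0 or 1 at random. The class name, the three-valued storage scheme, and the reward/penalty update used here are illustrative assumptions, not the paper's exact formulation of the node logic or of noise-based training.

```python
import random

class ProbabilisticLogicNode:
    """Illustrative truth-table node with a probabilistic element.

    Each of the 2**n_inputs table entries holds 0, 1, or 'u' (unknown).
    Stored 0/1 values are output deterministically; 'u' outputs 0 or 1
    with equal probability. This three-valued scheme is an assumption
    made only to illustrate the kind of node model the paper describes.
    """

    def __init__(self, n_inputs):
        self.table = ['u'] * (2 ** n_inputs)  # all entries start unknown

    def _address(self, inputs):
        # Map a binary input tuple to an integer index into the table.
        addr = 0
        for bit in inputs:
            addr = (addr << 1) | (1 if bit else 0)
        return addr

    def output(self, inputs):
        entry = self.table[self._address(inputs)]
        if entry == 'u':
            return random.randint(0, 1)  # the probabilistic element
        return entry

    def reward(self, inputs, value):
        # Freeze the addressed entry at the value that proved correct.
        self.table[self._address(inputs)] = value

    def penalise(self, inputs):
        # Return the addressed entry to the unknown (probabilistic) state.
        self.table[self._address(inputs)] = 'u'


if __name__ == "__main__":
    # Toy demonstration: teach a single 2-input node the XOR (parity)
    # truth table by reward/penalty. A single node can store parity
    # directly; the 'hard' learning discussed in the paper concerns
    # multi-node networks, where credit assignment is the difficulty.
    node = ProbabilisticLogicNode(2)
    patterns = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    for _ in range(20):
        for inputs, target in patterns:
            out = node.output(inputs)
            if out == target:
                node.reward(inputs, out)
            else:
                node.penalise(inputs)
    print([node.output(inp) for inp, _ in patterns])  # expected: [0, 1, 1, 0]
```

The stochastic ‘unknown’ state is what makes a network of such nodes a probabilistic automaton: untrained entries inject noise, and training progressively removes that noise by pinning entries to definite values, which is one way to picture the descent into trained stable states described above.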