Structural Connectionist Learning with Complementary Coding

We propose a new learning algorithm, structural learning with complementary coding, for concept learning problems. We introduce a new grouping measure that forms a similarity matrix over the training set, and we show that this similarity matrix provides a sufficient condition for the linear separability of the set. Using this sufficient condition, one can find a suitable composition of linearly separable threshold functions that exactly classifies the set of labeled vectors. In the nonlinearly separable case, the internal representation of the connectionist network, namely the number of hidden units and the value space of those units, is determined before learning from the structure of the similarity matrix. A three-layer neural network is then constructed in which each linearly separable threshold function is computed by a linear-threshold unit whose weights are determined by a one-shot learning algorithm requiring only a single presentation of the training set. The structural learning algorithm then sets the connection weights so as to realize the predetermined internal representation. This pre-structured internal representation, the activation-value spaces at the hidden layer, defines intermediate concepts, and the target concept is learned as a combination of those intermediate concepts. The ability to create a pre-structured internal representation based on the grouping measure distinguishes structural learning from earlier methods such as backpropagation.
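The core ingredients above, complement coding of the inputs and a single-sweep weight assignment for a linear-threshold unit, can be sketched as follows. This is a minimal illustration under stated assumptions: complement coding is taken to mean appending 1 - x to each input vector, and the one-shot rule shown is a simple Hebbian-style stand-in chosen for clarity, not the paper's exact rule derived from the grouping measure.

```python
import numpy as np

def complement_code(X):
    """Complement coding: append 1 - x to each input vector.
    (Assumed encoding; the paper's exact coding scheme may differ.)"""
    X = np.asarray(X, dtype=float)
    return np.hstack([X, 1.0 - X])

def one_shot_threshold_unit(X, y):
    """Hebbian-style single-sweep weight assignment for a linear-threshold
    unit (an illustrative stand-in for the paper's one-shot rule)."""
    Z = complement_code(X)
    s = np.where(np.asarray(y) == 1, 1.0, -1.0)   # labels as +/-1
    w = (s[:, None] * Z).sum(axis=0)              # one sweep over the data
    scores = Z @ w
    # Place the threshold midway between the two classes' scores.
    theta = (scores[s == 1].min() + scores[s == -1].max()) / 2.0
    return w, theta

def predict(w, theta, X):
    """Linear-threshold unit: fire iff the weighted sum exceeds theta."""
    return (complement_code(X) @ w > theta).astype(int)

# Learn logical AND, a linearly separable threshold function.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]
w, theta = one_shot_threshold_unit(X, y)
print(predict(w, theta, X))  # [0 0 0 1]
```

In the nonlinearly separable case, the paper composes several such units: each hidden linear-threshold unit realizes one intermediate concept, and the output unit combines them into the target concept.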