Problem decomposition and subgoaling in artificial neural networks

A general principle of problem decomposition and subgoaling is proposed for designing an artificial neural network (ANN) and its learning algorithms. The basic idea is divide-and-conquer. The principle is explored systematically, and it is shown through several examples that it can benefit the design of ANNs and their learning algorithms in general. Three types of subgoal decomposability are identified: serial, parallel, and diameter-limited. It is shown that the difficulties of scaling up, and of deciding what network structure to use for a problem, may be solved or alleviated using the subgoal decomposition principle. A learning algorithm based on the principle is developed for training multilayer perceptrons to classify nonlinearly separable clusters. Convergence to the correct classification is guaranteed if the patterns are separable. The algorithm simultaneously learns the structure and connection weights of the network.
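The divide-and-conquer idea behind the abstract can be illustrated with a minimal sketch: a problem that is not linearly separable as a whole (XOR) is split into linearly separable subgoals, each solved by an ordinary perceptron, and the subgoal outputs are combined by a final perceptron. The particular decomposition chosen here (two AND-like subgoals joined by OR) is an assumption for illustration, not the paper's algorithm, which learns the decomposition and structure automatically.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    """Standard perceptron rule; converges if (X, y) is linearly separable."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (yi - pred) * xi
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return (Xb @ w > 0).astype(int)

# XOR is not linearly separable as a single problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_xor = np.array([0, 1, 1, 0])

# Subgoal decomposition: two linearly separable subproblems.
y_sub1 = np.array([0, 1, 0, 0])  # x2 AND NOT x1
y_sub2 = np.array([0, 0, 1, 0])  # x1 AND NOT x2

w1 = train_perceptron(X, y_sub1)
w2 = train_perceptron(X, y_sub2)

# Hidden-layer outputs feed a final perceptron (OR of the subgoals),
# which is again a linearly separable problem.
H = np.column_stack([predict(w1, X), predict(w2, X)])
w_out = train_perceptron(H, y_xor)

print(predict(w_out, H))  # → [0 1 1 0], matching y_xor
```

Because every subproblem is linearly separable, the perceptron convergence theorem guarantees each stage finds a correct weight vector, mirroring the abstract's claim that convergence is guaranteed whenever the decomposed patterns are separable.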