Sparse initial topologies for high order perceptrons

High order neural networks are more powerful than first order neural networks. Their main drawback is the large number of connections they entail in their default, fully interlayer connected form. The methods presented here produce initial topologies that use only a very small fraction of the possible connections, yet provide a good framework for successful training of high order perceptrons. These initial topologies can be refined using ontogenic techniques that modify the topology during training. The methods are based on approximating real valued data by Boolean data, which serves as the basis for constructing the network structure. Their effectiveness in reducing network size is evaluated by comparison with fully interlayer connected high order perceptrons, and their performance is assessed by testing the generalization capabilities of the resulting networks.
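The idea of Booleanizing real valued data to select a sparse set of high order terms can be sketched as follows. This is an illustrative sketch, not the paper's exact algorithm: the median thresholding discretization, the XOR based correlation screen, and the 0.3 selection threshold are all assumptions chosen for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy real-valued dataset: the target depends on the sign of the
# product x0 * x1, which a first-order perceptron cannot capture.
X = rng.uniform(-1, 1, size=(200, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# Step 1 (assumed discretization): approximate real-valued inputs by
# Boolean values via thresholding at each feature's median.
B = (X > np.median(X, axis=0)).astype(int)

# Step 2 (assumed selection rule): keep only second-order terms whose
# Boolean combination correlates with the target. The surviving pairs
# form the sparse initial topology instead of all O(n^2) connections.
n = X.shape[1]
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
selected = []
for i, j in pairs:
    term = B[:, i] ^ B[:, j]            # XOR tracks sign-product structure
    corr = abs(np.corrcoef(term, y)[0, 1])
    if corr > 0.3:                      # assumed selection threshold
        selected.append((i, j))

# Step 3: build the feature map from first-order inputs plus only the
# selected second-order products, then train an ordinary perceptron.
def features(X):
    prods = [X[:, i] * X[:, j] for i, j in selected]
    return np.column_stack([X] + prods) if selected else X

F = features(X)
w = np.zeros(F.shape[1])
b = 0.0
for _ in range(50):                     # classic perceptron learning rule
    for f, t in zip(F, y):
        pred = int(f @ w + b > 0)
        w += (t - pred) * f
        b += (t - pred)

acc = np.mean((F @ w + b > 0).astype(int) == y)
print(len(selected), "of", len(pairs), "second-order terms retained")
```

On this toy task the screen retains the informative pair (0, 1) and discards the rest, so the high order perceptron trains on a small fraction of the possible product connections, mirroring the sparsity argument of the abstract.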