Sparse initial topologies for high order perceptrons
High order neural networks are more powerful than first order neural networks. Their main drawback is the large number of connections they entail in their default, fully interlayer-connected form. The methods presented here produce initial topologies that use only a very small fraction of the possible connections, yet provide a good framework for successful training of high order perceptrons. These initial topologies can be further refined using ontogenic techniques that modify the topology during training. The methods are based on approximating real-valued data by Boolean data, which serves as a basis for constructing the network structure. Their effectiveness in reducing network size is evaluated by comparison with fully interlayer-connected high order perceptrons, and their performance is assessed by testing the generalization capabilities of the resulting networks.
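To make the connection-count argument concrete, the following is a minimal sketch of a second-order perceptron unit whose pairwise connections are restricted by a sparse mask. The random selection of active pairs here is purely illustrative; the paper derives the sparse topology from a Boolean approximation of the data, which is not reproduced in this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8  # input dimension (illustrative)
# All possible second-order (pairwise) connections: n*(n-1)/2 of them.
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

# Hypothetical sparse topology: keep only a small fraction of the
# possible second-order connections. (Chosen at random here; the
# paper's methods select them from a Boolean encoding of the data.)
keep = rng.choice(len(pairs), size=len(pairs) // 4, replace=False)
active_pairs = [pairs[k] for k in keep]

w0 = 0.0                                  # bias
w1 = rng.normal(size=n)                   # first-order weights
w2 = rng.normal(size=len(active_pairs))   # sparse second-order weights

def forward(x):
    """Pre-activation of a second-order unit with a sparse topology."""
    second = sum(w * x[i] * x[j] for w, (i, j) in zip(w2, active_pairs))
    return float(w0 + w1 @ x + second)

x = rng.normal(size=n)
y = forward(x)
print(len(pairs), len(active_pairs))  # 28 possible vs. 7 kept connections
```

Even at this toy scale the quadratic growth is visible: 28 possible pairwise connections for 8 inputs, of which the sparse mask retains only 7. For realistic input dimensions and orders above two, pruning the initial topology in this way is what keeps the network size tractable.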