Using Multiple Node Types to Improve the Performance of DMP

This paper discusses a method for training multi-layer perceptron networks called DMP2 (Dynamic Multi-layer Perceptron 2). The method is based on a divide-and-conquer approach that builds networks in the form of binary trees, dynamically allocating nodes and layers as needed. The focus of this paper is on the effects of using multiple node types within the DMP framework. Simulation results show that DMP2 compares favorably with other learning algorithms, and that using multiple node types can improve network performance.
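The abstract's divide-and-conquer idea can be illustrated with a minimal sketch: train a single perceptron node, and if it cannot separate its training data, partition the data by that node's own decision boundary and recursively grow child nodes for the subsets. This is an illustrative reconstruction of the general strategy only; the class and function names below (`PerceptronNode`, `grow_tree`, `predict`) are hypothetical and do not reproduce the published DMP2 algorithm, its node types, or its convergence guarantees.

```python
import numpy as np

class PerceptronNode:
    """A single threshold unit with weights and a bias.
    Illustrative only -- not the published DMP2 node."""
    def __init__(self, n_inputs, rng):
        self.w = rng.normal(scale=0.1, size=n_inputs + 1)  # weights + bias
        self.left = None   # child handling inputs this node maps to 0
        self.right = None  # child handling inputs this node maps to 1

    def output(self, x):
        return 1 if self.w @ np.append(x, 1.0) > 0 else 0

    def train(self, X, y, epochs=100, lr=0.1):
        """Classic perceptron rule; returns True if the data were separated."""
        for _ in range(epochs):
            errors = 0
            for xi, yi in zip(X, y):
                pred = self.output(xi)
                if pred != yi:
                    self.w += lr * (yi - pred) * np.append(xi, 1.0)
                    errors += 1
            if errors == 0:
                return True
        return False

def grow_tree(X, y, rng, depth=0, max_depth=4):
    """Divide and conquer: if a node cannot separate its data, split the
    data by the node's own decision boundary and recurse on each side."""
    node = PerceptronNode(X.shape[1], rng)
    if node.train(X, y) or depth >= max_depth:
        return node
    side = np.array([node.output(xi) for xi in X])
    for bit, attr in ((0, "left"), (1, "right")):
        mask = side == bit
        # Grow a child only for a side where this node still misclassifies.
        if mask.any() and (y[mask] != bit).any():
            setattr(node, attr,
                    grow_tree(X[mask], y[mask], rng, depth + 1, max_depth))
    return node

def predict(node, x):
    """Route the input down the tree; a missing child means this node's
    output was already correct for that side of its boundary."""
    out = node.output(x)
    child = node.left if out == 0 else node.right
    return out if child is None else predict(child, x)
```

A linearly separable problem such as AND is solved by the root node alone; a non-separable one such as XOR forces the sketch to allocate child nodes, mirroring the dynamic allocation described in the abstract.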
