Performance of generalized multi-layered perceptrons and layered arbitrarily connected networks trained using the Levenberg-Marquardt method

The generalized multilayer perceptron (gMLP) augments the multilayer perceptron (MLP) architecture with all possible non-recurrent connections. The layered arbitrarily connected network (lACN) adds connections from input nodes directly to output nodes on top of those of an MLP. In this paper the performance of MLP, lACN, and gMLP networks trained with the Levenberg-Marquardt method is compared on a number of function approximation tasks. The effects of varying the number of hidden-layer neurons, the error termination condition, and the training set size were also evaluated. The results presented here are preliminary findings; in particular, additional testing on benchmark real data sets is needed.
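The paper itself includes no code; the following is a minimal sketch of the damped Gauss-Newton update at the heart of the Levenberg-Marquardt method, together with the adaptive damping loop and an error termination condition like the one varied in the experiments. The function names and the residual/Jacobian interfaces are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def levenberg_marquardt_step(w, residuals, jacobian, mu):
    """One damped Gauss-Newton (Levenberg-Marquardt) update.

    w         : current weight vector, shape (n,)
    residuals : callable w -> error vector e, shape (m,)
    jacobian  : callable w -> Jacobian de/dw, shape (m, n)
    mu        : damping factor
    """
    e = residuals(w)
    J = jacobian(w)
    # Solve (J^T J + mu*I) dw = J^T e; a large mu behaves like
    # gradient descent, a small mu like pure Gauss-Newton.
    A = J.T @ J + mu * np.eye(w.size)
    dw = np.linalg.solve(A, J.T @ e)
    return w - dw

def train_lm(w, residuals, jacobian, mu=1e-2, max_iters=200, err_tol=1e-6):
    """Adaptive damping: shrink mu after a successful step, grow it otherwise."""
    # Sum-of-squares error, used both for step acceptance and termination.
    err = 0.5 * np.sum(residuals(w) ** 2)
    for _ in range(max_iters):
        w_new = levenberg_marquardt_step(w, residuals, jacobian, mu)
        err_new = 0.5 * np.sum(residuals(w_new) ** 2)
        if err_new < err:
            # Step succeeded: accept it and trust the quadratic model more.
            w, err, mu = w_new, err_new, mu * 0.1
        else:
            # Step failed: keep the old weights, damp toward gradient descent.
            mu *= 10.0
        if err < err_tol:  # error termination condition
            break
    return w
```

Note that the update rule itself is architecture-independent: moving from an MLP to an lACN or gMLP topology only changes which weights appear in w, and hence which columns the Jacobian carries.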
