Scale equalization for higher-order neural networks

This paper presents a novel approach, called scale equalization (SE), to implementing higher-order neural networks. SE is particularly useful for eliminating the scale divergence problem commonly encountered in higher-order networks. Generally, the larger the scale divergence, the more training steps are required to complete the training process. The effectiveness of SE is illustrated with an exemplar higher-order network built on the Sigma-Pi network (SPN), denoted SESPN, applied to function approximation. SESPN requires the same computation time per epoch as SPN, but it takes far fewer epochs to complete training. Empirical results verify that SESPN outperforms other higher-order neural networks in terms of computational efficiency.
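To make the scale divergence problem concrete, consider a Sigma-Pi network whose hidden terms are products of inputs: when inputs are not of unit magnitude, a third-order product term can be orders of magnitude larger than a first-order term, so gradient descent converges slowly. The sketch below is a minimal illustration of this idea, not the paper's actual algorithm: `sigma_pi_features` builds the product terms, and `scale_equalize` is one plausible equalization scheme (rescaling each term to unit standard deviation over the training set); the function names and the choice of standard deviation as the scale estimate are assumptions for illustration.

```python
import numpy as np
from itertools import combinations_with_replacement

def sigma_pi_features(X, order=2):
    """Build Sigma-Pi product terms of the inputs up to the given order.

    X has shape (n_samples, n_inputs); the result stacks every product
    x_i * x_j * ... for index tuples of length 1..order.
    """
    n = X.shape[1]
    feats = [X[:, [i]] for i in range(n)]  # first-order terms
    for k in range(2, order + 1):
        for idx in combinations_with_replacement(range(n), k):
            # Product term over the chosen input indices.
            feats.append(np.prod(X[:, idx], axis=1, keepdims=True))
    return np.hstack(feats)

def scale_equalize(F, eps=1e-8):
    """Rescale each product term to unit standard deviation, shrinking
    the spread of magnitudes between low- and high-order terms."""
    return F / (F.std(axis=0, keepdims=True) + eps)
```

With inputs drawn uniformly from [0, 10], the raw third-order terms dwarf the first-order ones (their standard deviations differ by orders of magnitude), while after equalization every term has unit scale, so a subsequent linear output layer trains on comparably scaled features.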
