On the application of orthogonal transformation for the design and analysis of feedforward networks

Orthogonal transformation, which compacts information, has been used in two ways to optimize the size of feedforward networks: 1) through the selection of an optimum set of time-domain inputs and an optimum set of links and nodes within a neural network (NN); and 2) through orthogonalization of the data used in NNs, in the case of processes with periodicity. The proposed methods are efficient and numerically very robust. The transformations used are the singular value decomposition (SVD) and QR factorization with column pivoting (QRcp). SVD serves mainly as a null-space detector; QRcp coupled with SVD is used for subset selection, one of the main operations on which the design of the optimal network is based. SVD has also been used to devise a new approach to assessing the convergence of NNs, as an alternative to conventional output-error analysis.
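The SVD/QRcp pairing described above can be illustrated with a minimal sketch: SVD estimates the numerical rank of the input matrix (its role as a null-space detector), and QR with column pivoting then orders the candidate inputs so that the leading rank-many columns form a well-conditioned subset. This is an assumption-laden illustration using NumPy/SciPy, not the paper's actual procedure; the function name `select_inputs` and the tolerance rule are hypothetical.

```python
import numpy as np
from scipy.linalg import qr

def select_inputs(X, tol=1e-8):
    """Hypothetical sketch of SVD + QRcp subset selection.

    SVD gives the numerical rank r (null-space detection);
    QR with column pivoting ranks columns by independence,
    and the first r pivots are kept as the selected inputs.
    """
    s = np.linalg.svd(X, compute_uv=False)      # singular values, descending
    r = int(np.sum(s > tol * s[0]))             # numerical rank estimate
    _, _, piv = qr(X, pivoting=True)            # QRcp column ordering
    return np.sort(piv[:r])                     # indices of the retained inputs

# Toy example: column 2 duplicates column 0, so only two inputs survive.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
A[:, 2] = A[:, 0]
idx = select_inputs(A)
print(idx)
```

On the duplicated-column example the routine returns exactly two indices: column 1 plus one of the two identical columns, discarding the redundant input rather than feeding both to the network.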
