Regularization of a Programmed Recurrent Artificial Neural Network

A method is developed for manually constructing recurrent artificial neural networks (RANNs) to model the fusion of experimental data and mathematical models of physical systems. The construction relies on Generalized Tikhonov Regularization (GTR) and on imposing constraints on the values of the input, bias, and output weights. Assigning a distinct role to each of these parameters allows a polynomial approximation to be mapped onto an artificial neural network architecture. GTR provides a rational means of combining theoretical models, computational data, and experimental measurements into a global representation of a domain. Attention is focused on a second-order nonlinear ordinary differential equation, which governs the classic Duffing's oscillator. This nonlinear ordinary differential equation is modeled by the RANN architecture in conjunction with the popular hyperbolic tangent transfer function. GTR is then used to smoothly merge the response of the RANN with experimental data. Moreover, the approach is shown to accommodate other smooth neuron transfer functions, provided they admit a Taylor series expansion. A numerical example is presented illustrating the accuracy and utility of the method.
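As a minimal sketch of the two ingredients the abstract combines, the Python example below integrates a Duffing oscillator, generates synthetic noisy "measurements" from it, and merges them into a smooth global representation via a Tikhonov-regularized least-squares fit over a basis expansion. All coefficient values, the Chebyshev basis, and the identity penalty are illustrative assumptions; the paper's actual RANN construction and GTR formulation are not reproduced here.

```python
import numpy as np

# Duffing's equation: x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t).
# Coefficient values are illustrative assumptions, not taken from the paper.
def duffing_rhs(t, y, delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5, omega=1.2):
    x, v = y
    return np.array([v, -delta * v - alpha * x - beta * x**3
                     + gamma * np.cos(omega * t)])

def rk4(f, y0, t):
    """Classic fourth-order Runge-Kutta time marching."""
    y = np.empty((len(t), len(y0)))
    y[0] = y0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        k1 = f(t[i], y[i])
        k2 = f(t[i] + h / 2, y[i] + h / 2 * k1)
        k3 = f(t[i] + h / 2, y[i] + h / 2 * k2)
        k4 = f(t[i] + h, y[i] + h * k3)
        y[i + 1] = y[i] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

# "Theoretical model" response obtained by integrating the ODE.
t = np.linspace(0.0, 10.0, 1001)
model = rk4(duffing_rhs, np.array([1.0, 0.0]), t)[:, 0]

# Synthetic "experimental" data: the model response plus measurement noise.
rng = np.random.default_rng(0)
data = model + 0.05 * rng.standard_normal(model.shape)

# Tikhonov-regularized least squares over a Chebyshev basis:
#   minimize ||A c - data||^2 + lam * ||L c||^2,
# with closed-form solution c = (A^T A + lam L^T L)^{-1} A^T data.
A = np.polynomial.chebyshev.chebvander(2.0 * t / t[-1] - 1.0, 15)
L = np.eye(A.shape[1])      # identity penalty: plain (zeroth-order) Tikhonov
lam = 1e-3
c = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ data)
fit = A @ c                 # smooth merged representation of the noisy data
```

Increasing `lam` pulls the fit toward the penalty's null preference (smaller coefficients), while decreasing it tracks the noisy data more closely; GTR generalizes this trade-off by letting the penalty encode a theoretical model rather than a generic smoothness prior.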
