Robustness analysis of radial basis function and multi-layered feed-forward neural network models

In this paper, two popular types of neural network models, radial basis function (RBF) networks and multi-layered feed-forward (MLF) networks trained by the generalized delta rule, are tested for their robustness to random errors in input space. A method is proposed to estimate the sensitivity of the network outputs to the amplitude of random errors in the input space, where the errors are sampled from known normal distributions. An additional parameter can be extracted that gives a general indication of the bias in the network predictions. The modelling performance of MLF and RBF neural networks has been tested on a variety of simulated function approximation problems. Since the results of the proposed validation method depend strongly on the configuration of the networks and on the data used, little can be said about robustness as an intrinsic quality of a neural network model. However, given a data set in which the ‘pure’ errors in input and output space are specified, the method can be applied to select the neural network model that optimally approximates the nonlinear relations between objects in input and output space. The proposed method has been applied to a nonlinear modelling problem from industrial chemical practice. Since MLF and RBF networks are based on different concepts derived from biological neural processes, a brief theoretical introduction to both network types is given.
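The abstract outlines the core procedure: perturb the network inputs with zero-mean Gaussian noise of increasing amplitude, then summarize how strongly the outputs react (sensitivity) and how far the mean prediction drifts from the clean prediction (bias). The Python sketch below illustrates this idea; the function robustness_probe, the choice of estimators (mean output standard deviation for sensitivity, mean shift from the clean predictions for bias), and the noise amplitudes are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def robustness_probe(predict, X, sigmas, n_draws=200, seed=None):
    """Estimate output sensitivity and bias under Gaussian input noise.

    predict : trained model as a callable, (n, d) array -> (n,) or (n, k) array
    X       : clean input objects, shape (n, d)
    sigmas  : noise standard deviations to test (the input-error amplitudes)
    """
    rng = np.random.default_rng(seed)
    y_clean = predict(X)                      # reference predictions on clean inputs
    sensitivity, bias = [], []
    for sigma in sigmas:
        # Repeatedly perturb the inputs with N(0, sigma^2) errors.
        noisy = np.stack([predict(X + rng.normal(0.0, sigma, size=X.shape))
                          for _ in range(n_draws)])
        # Sensitivity: average spread of the outputs over the noise draws.
        sensitivity.append(noisy.std(axis=0).mean())
        # Bias: average shift of the noisy predictions from the clean ones.
        bias.append((noisy.mean(axis=0) - y_clean).mean())
    return np.asarray(sensitivity), np.asarray(bias)

# Toy stand-in for a trained RBF or MLF network:
predict = lambda X: np.sin(X).sum(axis=1)
X = np.random.default_rng(0).uniform(-1.0, 1.0, size=(100, 3))
sens, bias = robustness_probe(predict, X, sigmas=[0.01, 0.05, 0.1])
```

Comparing the sensitivity curves of two candidate networks trained on the same data gives the kind of selection criterion the abstract describes: for a given input-error amplitude, the model with the flatter curve is the more robust choice.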
