Efficient incremental construction of RBF networks using quasi-gradient method

Abstract

Artificial Neural Networks have been found to be efficient universal approximators. Single Layer Feedforward Networks (SLFN) are among the most popular and the easiest to train. The neurons in these networks can use either sigmoidal or radial basis functions (RBF) as activation functions, and both have been shown to work efficiently. Sigmoidal networks are already well described in the literature, so this paper focuses on constructing a SLFN architecture with RBF neurons. Many algorithms exist for constructing or training networks to solve function approximation problems. This paper proposes an algorithm that modifies the Incremental Extreme Learning Machine (I-ELM) family of algorithms. The proposed algorithm eliminates randomness in the learning process with respect to the center positions and widths of the RBF neurons. To do this, the input with the highest error magnitude is recorded during error calculation and then used as the center of the next incrementally added neuron. The radius of the new neuron is then chosen iteratively using the Nelder–Mead simplex method. This preserves the universal approximation properties of I-ELM while greatly reducing the size of the trained RBF networks.
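To make the construction loop concrete, the following is a minimal Python/NumPy sketch of the idea the abstract describes. It is an illustration under stated assumptions, not the paper's implementation: the Gaussian basis, the scalar width parameterization, the per-neuron least-squares output weight, and SciPy's Nelder-Mead optimizer are stand-ins chosen here for simplicity.

```python
import numpy as np
from scipy.optimize import minimize


def rbf(X, center, width):
    """Gaussian radial basis function evaluated at each row of X (an assumption;
    other radial kernels would fit the same scheme)."""
    return np.exp(-np.sum((X - center) ** 2, axis=-1) / (2.0 * width ** 2))


def train_incremental_rbf(X, y, max_neurons=50, tol=1e-3):
    """Incrementally add RBF neurons until the residual error is small."""
    residual = y.astype(float).copy()
    centers, widths, weights = [], [], []
    for _ in range(max_neurons):
        # Center the new neuron at the input with the largest residual error
        # (instead of drawing the center at random, as plain I-ELM does).
        k = int(np.argmax(np.abs(residual)))
        c = X[k]

        # Pick the width by minimizing the residual norm with the Nelder-Mead
        # simplex; for each candidate width, the output weight is the
        # one-dimensional least-squares solution.
        def loss(w):
            phi = rbf(X, c, abs(w[0]) + 1e-8)
            beta = (phi @ residual) / (phi @ phi)
            return np.linalg.norm(residual - beta * phi)

        result = minimize(loss, x0=[1.0], method="Nelder-Mead")
        w = abs(result.x[0]) + 1e-8

        # Freeze the new neuron and update the residual, as in I-ELM.
        phi = rbf(X, c, w)
        beta = (phi @ residual) / (phi @ phi)
        residual = residual - beta * phi
        centers.append(c)
        widths.append(w)
        weights.append(beta)
        if np.linalg.norm(residual) < tol:
            break
    return np.array(centers), np.array(widths), np.array(weights)


# Toy usage: approximate sinc on [-3, 3].
X = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
y = np.sinc(X[:, 0])
C, W, B = train_incremental_rbf(X, y)
y_hat = sum(b * rbf(X, c, w) for c, w, b in zip(C, W, B))
```

Compared with plain I-ELM, the only randomness removed in this sketch is in the center and width choices; the residual-driven center selection and the simplex width search are the two steps the abstract singles out.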
