On the Approximation by Single Hidden Layer Feedforward Neural Networks With Fixed Weights

Single hidden layer feedforward neural networks (SLFNs) with fixed weights possess the universal approximation property provided that the approximated functions are univariate. This result, however, places no restriction on the number of neurons in the hidden layer: the larger this number, the more likely the network is to approximate with high accuracy. In this note, we constructively prove that SLFNs with the fixed weight 1 and only two neurons in the hidden layer can approximate any continuous function on a compact subset of the real line. The proof proceeds by a step-by-step construction of a universal sigmoidal activation function. This function has desirable properties such as computability, smoothness, and weak monotonicity. The applicability of the obtained result is demonstrated in various numerical examples. Finally, we show that SLFNs with fixed weights cannot approximate all continuous multivariate functions.
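
The two-neuron network form described above admits a short illustration. The sketch below is not the construction from the note: the universal activation function built in the proof is replaced by the ordinary logistic sigmoid, the shifts theta1 and theta2 and the target function sin(x) on [0, 3] are illustrative assumptions, and the outer coefficients are fitted by least squares. It therefore carries no approximation guarantee; it only shows the architecture of an SLFN with two hidden neurons and the inner weight fixed to 1.

```python
import numpy as np

# Minimal sketch (assumptions noted above), illustrating the network form
#     N(x) = c0 + c1 * sigma(x - theta1) + c2 * sigma(x - theta2),
# i.e. an SLFN with two hidden neurons whose inner weight is fixed to 1.

def sigma(t):
    # Stand-in logistic sigmoid; NOT the universal activation constructed in the paper.
    return 1.0 / (1.0 + np.exp(-t))

def slfn_two_neurons(x, c0, c1, c2, theta1, theta2):
    # Fixed inner weight 1: each neuron receives x - theta rather than w*x - theta.
    return c0 + c1 * sigma(x - theta1) + c2 * sigma(x - theta2)

# Example: approximate f(x) = sin(x) on the compact interval [0, 3] by
# fitting only the outer coefficients (c0, c1, c2) for two fixed shifts.
x = np.linspace(0.0, 3.0, 200)
f = np.sin(x)
theta1, theta2 = 0.5, 2.0  # hypothetical shifts chosen for illustration
A = np.column_stack([np.ones_like(x), sigma(x - theta1), sigma(x - theta2)])
(c0, c1, c2), *_ = np.linalg.lstsq(A, f, rcond=None)
err = np.max(np.abs(slfn_two_neurons(x, c0, c1, c2, theta1, theta2) - f))
print("max error on [0, 3]:", err)
```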
