The Uniqueness Theorem for Complex-Valued Neural Networks with Threshold Parameters and the Redundancy of the Parameters

This paper proves the uniqueness theorem for 3-layered complex-valued neural networks whose hidden-neuron threshold parameters may take nonzero values. That is, if a 3-layered complex-valued neural network is irreducible, then the network that approximates a given complex-valued function is uniquely determined up to a finite group of transformations of its learnable parameters.
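For concreteness, a 3-layered complex-valued neural network of the kind considered here can be written (with notation assumed for this sketch rather than taken from the paper) as

f(z) = \sum_{j=1}^{m} c_j \, \sigma\!\left( w_j^{\top} z + \theta_j \right), \qquad z \in \mathbb{C}^n,

where w_j \in \mathbb{C}^n and \theta_j \in \mathbb{C} are the weights and the (possibly nonzero) threshold of the j-th hidden neuron, c_j \in \mathbb{C} is its output weight, and \sigma is the complex activation function. Two transformations that typically leave f unchanged, and hence make the parameters redundant, are permuting the hidden neurons and, assuming for illustration an odd activation with \sigma(-u) = -\sigma(u), replacing (w_j, \theta_j, c_j) by (-w_j, -\theta_j, -c_j) at any hidden neuron; the uniqueness theorem asserts that, for an irreducible network, the full redundancy is exhausted by a finite group of such parameter transformations.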
