Boundedness and convergence of split complex gradient descent algorithm with momentum and regularizer for TSK fuzzy models

Abstract This paper investigates a split-complex gradient descent based neuro-fuzzy algorithm with self-adaptive momentum and an L2 regularizer for training TSK (Takagi–Sugeno–Kang) fuzzy inference models. The main obstacle to processing complex-valued data with a fuzzy system is the conflict between boundedness and analyticity in the complex domain, as expressed by Liouville's theorem. The proposed algorithm circumvents this by operating on a pair of real-valued functions, splitting each complex variable into its real and imaginary parts. A self-adaptive momentum term is incorporated into the learning mechanism to accelerate training, and an L2 regularizer is added to control the magnitude of the weight parameters. Furthermore, a detailed convergence analysis of the proposed algorithm is presented: the monotone decrease of the error function and the convergence of the weight sequence are guaranteed, and under an additional mild condition, strong convergence of the weight sequence is deduced. Finally, simulation results are reported to verify the theoretical analysis.
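To make the learning rule concrete, a minimal sketch in generic notation is given below; the symbols E, \tilde{E}, w^R, w^I, \eta, \lambda, and \tau_m are assumptions introduced here for illustration and need not match the paper's exact notation. The complex weight vector is split into its real part w^R and imaginary part w^I, the L2 penalty is added to the empirical error, and the momentum term reuses the previous weight increment:

\[ E(\mathbf{w}) \;=\; \tilde{E}\big(\mathbf{w}^{R},\mathbf{w}^{I}\big) \;+\; \frac{\lambda}{2}\Big(\lVert \mathbf{w}^{R}\rVert^{2}+\lVert \mathbf{w}^{I}\rVert^{2}\Big), \]
\[ \mathbf{w}^{R,m+1} \;=\; \mathbf{w}^{R,m} \;-\; \eta\,\frac{\partial E}{\partial \mathbf{w}^{R}}\big(\mathbf{w}^{m}\big) \;+\; \tau_{m}\big(\mathbf{w}^{R,m}-\mathbf{w}^{R,m-1}\big), \]
\[ \mathbf{w}^{I,m+1} \;=\; \mathbf{w}^{I,m} \;-\; \eta\,\frac{\partial E}{\partial \mathbf{w}^{I}}\big(\mathbf{w}^{m}\big) \;+\; \tau_{m}\big(\mathbf{w}^{I,m}-\mathbf{w}^{I,m-1}\big), \]

where the momentum coefficient \tau_m is adapted at each iteration m rather than held fixed (in related work such coefficients are often tied to the current gradient norm), which is what makes the momentum "self-adaptive" in the sense of the abstract.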
