A training algorithm with selectable search direction for complex-valued feedforward neural networks

This paper presents an efficient training algorithm for complex-valued feedforward neural networks that utilizes a tree structure. The basic idea of the proposed algorithm is that, by introducing a set of direction factors, distinct search directions can be selected at each iteration so that the objective function is reduced as much as possible. Compared with several well-known training algorithms, our algorithm determines the search direction with greater flexibility and thus obtains more accurate solutions with faster convergence. Experimental simulations on pattern recognition, channel equalization, and complex function approximation verify the effectiveness and applicability of the proposed algorithm.
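The following is a minimal, hypothetical sketch (not the authors' code) of the selectable-search-direction idea: at each iteration, several candidate directions are formed from the current gradient and the previous direction, each scaled by a different direction factor, and the candidate that most reduces the objective at a trial point is kept. Here the direction factors are stand-ins borrowed from classical conjugate-gradient formulas (Fletcher-Reeves, Polak-Ribière, Hestenes-Stiefel, plus steepest descent); the names candidate_betas, select_direction, and trial_step are illustrative, and the paper's actual factors and selection rule may differ.

```python
import numpy as np

def candidate_betas(g, g_prev, d_prev, eps=1e-12):
    """Assumed direction factors: 0 (steepest descent), Fletcher-Reeves,
    Polak-Ribiere, and Hestenes-Stiefel betas, written with conjugated
    inner products so they also apply to complex-valued gradients."""
    fr = np.vdot(g, g).real / (np.vdot(g_prev, g_prev).real + eps)
    pr = np.vdot(g - g_prev, g).real / (np.vdot(g_prev, g_prev).real + eps)
    hs = np.vdot(g - g_prev, g).real / (np.vdot(g - g_prev, d_prev).real + eps)
    return [0.0, fr, pr, hs]

def select_direction(loss, w, g, g_prev, d_prev, trial_step=1e-2):
    """Evaluate the loss at a trial point along each candidate direction
    d = -g + beta * d_prev and return the direction with the best decrease."""
    best_d, best_val = None, np.inf
    for beta in candidate_betas(g, g_prev, d_prev):
        d = -g + beta * d_prev
        val = loss(w + trial_step * d)
        if val < best_val:
            best_d, best_val = d, val
    return best_d

# Toy usage on a complex quadratic: minimise ||w - c||^2.
c = np.array([1.0 + 2.0j, -0.5j])
loss = lambda w: np.vdot(w - c, w - c).real
w = np.zeros(2, dtype=complex)
g_prev = d_prev = np.ones(2, dtype=complex)   # dummy history for the first step
for _ in range(50):
    g = w - c                                 # Wirtinger gradient of this toy loss
    d = select_direction(loss, w, g, g_prev, d_prev)
    w, g_prev, d_prev = w + 0.1 * d, g, d
```

In a full implementation, the gradient of a complex-valued network would come from Wirtinger calculus and the fixed trial step would be replaced by a proper line search (for example, one satisfying Wolfe-type conditions), but the core selection loop would look like the one above.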
