Penalty Method for Constrained Distributed Quaternion-Variable Optimization

This article studies constrained optimization problems over the quaternion domain in a distributed fashion. We begin by presenting several differences in the generalized gradient between the real and quaternion domains. Then, a penalty-based algorithm for the considered problem is given, by which the constrained optimization problem is transformed into an unconstrained one. Using tools from Lyapunov theory and nonsmooth analysis, the convergence of the devised algorithm is then established. In addition, the designed algorithm can be implemented as a recurrent neural network, giving it potential for solving distributed neurodynamic optimization problems. Finally, a numerical example involving machine learning is given to illustrate the effectiveness of the obtained results.
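To illustrate the core penalty idea in isolation (this is a toy sketch, not the paper's distributed algorithm), the snippet below minimizes a hypothetical quadratic objective over a single quaternion variable, stored as a real 4-vector, with a unit-norm constraint folded into the objective as a quadratic penalty term; the target `t`, penalty weight `rho`, and step size are all assumptions chosen for the example.

```python
import numpy as np

# Toy problem (assumed for illustration): minimize f(q) = ||q - t||^2 over
# quaternions q (stored as real 4-vectors), subject to the unit-norm
# constraint ||q|| = 1, handled via a quadratic penalty.

t = np.array([1.0, 1.0, 0.0, 0.0])   # target quaternion (assumed)
rho = 10.0                            # penalty weight (assumed)

def penalized_objective(q):
    # Original objective plus penalty on constraint violation.
    return np.sum((q - t) ** 2) + rho * (np.dot(q, q) - 1.0) ** 2

def gradient(q):
    # Gradient of the penalized objective in the real 4-vector representation.
    return 2.0 * (q - t) + 4.0 * rho * (np.dot(q, q) - 1.0) * q

q = np.array([0.5, 0.0, 0.0, 0.0])   # initial iterate (assumed)
for _ in range(2000):
    q = q - 0.01 * gradient(q)        # plain gradient-descent step

print(np.round(q, 3))                 # near t/||t||, with ||q|| close to 1
```

As the penalty weight grows, the minimizer of the unconstrained surrogate approaches a feasible minimizer of the original constrained problem, which is the transformation the abstract refers to; the paper's algorithm additionally distributes the computation across agents and handles the quaternion-specific generalized gradient.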