Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings

This paper considers a networked system with a finite number of users in which each user tries to minimize its own private objective function over its own private constraint set. Each user's constraint set is assumed to be expressible as the fixed point set of a certain quasi-nonexpansive mapping, which covers the case in which the projection onto the constraint set cannot be computed efficiently. This paper proposes two methods for minimizing the sum of the users' nondifferentiable, convex objective functions over the intersection of the fixed point sets of their quasi-nonexpansive mappings in a real Hilbert space. One is a parallel subgradient method that can be implemented under the assumption that each user can communicate with the other users; the other is an incremental subgradient method that can be implemented under the assumption that each user can communicate only with its neighbors. Analysis of the two methods for a constant step size shows that, when the step size is small, they approximate a solution to the problem. When the step-size sequence is diminishing, the sequence generated by each method is shown to converge strongly to the solution of the problem under certain assumptions. Convergence rate analyses under certain conditions illustrate the efficiency of the two methods. The paper also discusses nonsmooth convex optimization over the sublevel sets of convex functions and provides numerical comparisons that demonstrate the effectiveness of the proposed methods.
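The following is a minimal numerical sketch of the two iteration schemes described in the abstract: a parallel subgradient step followed by averaging, and an incremental step in which the estimate is passed from user to user. It is not the paper's exact algorithm; the objective functions f_i, the sublevel-set constraints c_i, the step-size rule, and the subgradient-projection mappings Q_i used below are illustrative assumptions.

```python
# Sketch of parallel and incremental subgradient iterations over fixed point
# sets of quasi-nonexpansive mappings (illustrative assumptions, not the
# paper's exact updates):
#   - user i's objective is f_i(x) = |a_i . x - b_i| (nonsmooth, convex),
#   - user i's constraint set is the sublevel set {x : c_i(x) <= 0} of a
#     convex function c_i, handled through its subgradient projection Q_i,
#     which is quasi-nonexpansive with Fix(Q_i) = {x : c_i(x) <= 0}.
import numpy as np

rng = np.random.default_rng(0)
m, d = 5, 10                           # number of users, dimension
A = rng.standard_normal((m, d))
b = rng.standard_normal(m)
centers = rng.standard_normal((m, d)) * 0.1
radius = 2.0                           # c_i(x) = ||x - centers[i]|| - radius

def subgrad_f(i, x):
    # a subgradient of f_i(x) = |a_i . x - b_i|
    return np.sign(A[i] @ x - b[i]) * A[i]

def Q(i, x):
    # subgradient projection onto {x : c_i(x) <= 0}; quasi-nonexpansive,
    # usable even though the metric projection onto this set has no closed form
    diff = x - centers[i]
    c = np.linalg.norm(diff) - radius
    if c <= 0:
        return x
    g = diff / np.linalg.norm(diff)    # a subgradient of c_i at x
    return x - (c / (g @ g)) * g

def parallel_step(x, lam):
    # every user updates from the common point; the results are averaged,
    # which requires communication among the users
    return np.mean([Q(i, x - lam * subgrad_f(i, x)) for i in range(m)], axis=0)

def incremental_step(x, lam):
    # the estimate is passed from one user to the next around the network
    z = x.copy()
    for i in range(m):
        z = Q(i, z - lam * subgrad_f(i, z))
    return z

x_par = x_inc = np.zeros(d)
for k in range(1, 2001):
    lam = 1.0 / k                      # diminishing step size
    x_par = parallel_step(x_par, lam)
    x_inc = incremental_step(x_inc, lam)

print("parallel objective   :", sum(abs(A[i] @ x_par - b[i]) for i in range(m)))
print("incremental objective:", sum(abs(A[i] @ x_inc - b[i]) for i in range(m)))
```

The subgradient projection is used here because it reflects the motivation stated above: the constraint sets are fixed point sets of quasi-nonexpansive mappings that can be evaluated cheaply even when the metric projection onto the constraint set cannot.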
