A Fully Sparse Implementation of a Primal-Dual Interior-Point Potential Reduction Method for Semidefinite Programming

In this paper, we show how to exploit sparsity in the problem data in a primal-dual potential reduction method for solving a class of semidefinite programs. When the problem data are sparse, the dual variable is also sparse, but the primal variable is generally dense. To avoid working with the dense primal variable, we apply Fukuda et al.'s theory of positive definite matrix completion and work with partial matrices instead. Sparsity can also be exploited in the computation of the search direction, where the gradients and Hessian-matrix products of the primal and dual barrier functions must be computed at every iteration. Using an idea from reverse-mode automatic differentiation, both the gradient and the Hessian-matrix product can be computed in time proportional to the time needed to evaluate the barrier functions themselves. Moreover, the high space complexity normally associated with reverse-mode automatic differentiation can be avoided in this case. In addition, we propose a technique to efficiently compute the determinant of the positive definite matrix completion that is required for the primal search directions, together with a method of obtaining one of the primal search directions that minimizes the number of evaluations of that determinant. Finally, we implement the algorithm and test it on the problem of finding the maximum cut of a graph.
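To make the search-direction computation concrete, the following is a minimal dense sketch (not the paper's sparse implementation) of the log-det barrier, its gradient, and its Hessian-matrix product, verified against finite differences. For the dual barrier f(S) = -log det S on positive definite S, the gradient is -S⁻¹ and the Hessian applied to a symmetric matrix V is S⁻¹ V S⁻¹; in the sparse algorithm these would be computed only on the aggregate sparsity pattern, but the identities themselves are standard.

```python
import numpy as np

def barrier(S):
    """Log-det barrier f(S) = -log det S, evaluated via Cholesky."""
    L = np.linalg.cholesky(S)          # S = L L^T, det S = prod(diag(L))^2
    return -2.0 * np.sum(np.log(np.diag(L)))

def barrier_grad(S):
    """Gradient of f at S is -S^{-1} (dense here; sparse codes need
    only the entries of S^{-1} on the sparsity pattern of S)."""
    return -np.linalg.inv(S)

def barrier_hess_prod(S, V):
    """Hessian of f at S applied to a symmetric matrix V: S^{-1} V S^{-1}."""
    Sinv = np.linalg.inv(S)
    return Sinv @ V @ Sinv

# Small positive definite matrix with an (arrow-like) sparsity pattern.
S = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.0],
              [0.5, 0.0, 2.0]])

# Finite-difference check of the gradient in a random symmetric direction.
rng = np.random.default_rng(0)
V = rng.standard_normal((3, 3)); V = (V + V.T) / 2
eps = 1e-6
fd = (barrier(S + eps * V) - barrier(S - eps * V)) / (2 * eps)
an = np.sum(barrier_grad(S) * V)       # <grad f(S), V>
assert abs(fd - an) < 1e-6

# Finite-difference check of the Hessian-matrix product.
fd_H = (barrier_grad(S + eps * V) - barrier_grad(S - eps * V)) / (2 * eps)
assert np.allclose(fd_H, barrier_hess_prod(S, V), atol=1e-5)
```

The Cholesky-based evaluation is the natural starting point for the sparse case: with a fill-reducing ordering, the factor stays sparse, and both the determinant and the needed entries of the inverse can be read off the factor.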

[1] Samuel Burer et al. Semidefinite Programming in the Space of Partial Positive Semidefinite Matrices. SIAM J. Optim., 2003.

[2] M. Hestenes et al. Methods of conjugate gradients for solving linear systems. 1952.

[3] Stephen P. Boyd et al. Semidefinite Programming. SIAM Rev., 1996.

[4] H. Markowitz. The Elimination form of the Inverse and its Application to Linear Programming. 1957.

[5] Griewank et al. On automatic differentiation. 1988.

[6] A. Neumaier et al. Restricted maximum likelihood estimation of covariances in sparse linear models. Genetics Selection Evolution, 1998.

[7] D. Rose et al. Generalized nested dissection. 1977.

[8] James Renegar. A mathematical view of interior-point methods in convex optimization. MPS-SIAM Series on Optimization, 2001.

[9] Farid Alizadeh. Interior Point Methods in Semidefinite Programming with Applications to Combinatorial Optimization. SIAM J. Optim., 1995.

[10] Charles R. Johnson et al. Positive definite completions of partial Hermitian matrices. 1984.

[11] D. R. Fulkerson et al. Incidence matrices and interval graphs. 1965.

[12] E. de Klerk. Aspects of semidefinite programming: interior point algorithms and selected applications. 2002.

[13] David P. Williamson et al. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM, 1995.

[14] R. Freund. Review of "A mathematical view of interior-point methods in convex optimization" by James Renegar, SIAM, Philadelphia, PA. 2004.

[15] Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming. 1997.

[16] X. Yi. On Automatic Differentiation. 2005.

[17] Michael T. Heath. Scientific Computing: An Introductory Survey. 1996.

[18] Xiong Zhang et al. Solving Large-Scale Sparse Semidefinite Programs for Combinatorial Optimization. SIAM J. Optim., 1999.

[19] W. F. Tinney et al. On computing certain elements of the inverse of a sparse matrix. Commun. ACM, 1975.

[20] Kazuo Murota et al. Exploiting Sparsity in Semidefinite Programming via Matrix Completion I: General Framework. SIAM J. Optim., 2000.