A Deterministic Linear Program Solver in Current Matrix Multiplication Time

Interior point algorithms for solving linear programs have been studied extensively for a long time [e.g. Karmarkar 1984; Lee, Sidford FOCS'14; Cohen, Lee, Song STOC'19]. For linear programs of the form $\min_{Ax=b, x \ge 0} c^\top x$ with $n$ variables and $d$ constraints, the generic case $d = \Omega(n)$ has recently been settled by Cohen, Lee and Song [STOC'19]. Their algorithm solves linear programs in $\tilde O(n^\omega \log(n/\delta))$ expected time, where $\delta$ is the relative accuracy. This is essentially optimal, as all known linear system solvers require up to $O(n^{\omega})$ time to solve $Ax = b$. For deterministic solvers, however, the best upper bound remains Vaidya's 30-year-old $O(n^{2.5} \log(n/\delta))$ bound [FOCS'89]. In this paper we show that the deterministic setting can also be settled by derandomizing Cohen et al.'s $\tilde{O}(n^\omega \log(n/\delta))$ time algorithm. This yields a strict $\tilde{O}(n^\omega \log(n/\delta))$ time bound, instead of an expected one, and a simplified analysis that roughly halves the length of the proof of their central path method. Derandomizing this algorithm was also posed as an open question in Song's PhD thesis. The main tool behind our result is a new data structure that can maintain the solution to a linear system in subquadratic time. More precisely, we are able to maintain $\sqrt{U}A^\top(AUA^\top)^{-1}A\sqrt{U}\,v$ in subquadratic time under $\ell_2$ multiplicative changes to the diagonal matrix $U$ and the vector $v$; this type of change is common in interior point algorithms. Previous algorithms [e.g. Vaidya FOCS'89; Lee, Sidford FOCS'15; Cohen, Lee, Song STOC'19] required $\Omega(n^2)$ time for this task. [...]
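For concreteness, the NumPy sketch below computes the quantity the data structure maintains, $\sqrt{U}A^\top(AUA^\top)^{-1}A\sqrt{U}\,v$, by naive recomputation from scratch for a diagonal $U = \mathrm{diag}(u)$. The variable names (`A`, `u`, `v`) and the random test data are illustrative assumptions, not part of the paper; this direct recomputation costs on the order of a full matrix multiplication per query, which is exactly the cost the paper's data structure avoids when $U$ and $v$ undergo only small ($\ell_2$ multiplicative) changes between queries.

```python
import numpy as np

def project(A, u, v):
    """Compute sqrt(U) A^T (A U A^T)^{-1} A sqrt(U) v for U = diag(u).

    A is d x n, u is a positive length-n vector, v is a length-n vector.
    This is the naive recomputation; the paper's data structure maintains
    this vector in subquadratic time under l_2 multiplicative changes
    to u and v, rather than recomputing it from scratch.
    """
    s = np.sqrt(u)                 # sqrt(U), stored as a vector
    w = A @ (s * v)                # A sqrt(U) v              (length d)
    M = (A * u) @ A.T              # A U A^T: scale columns of A by u
    x = np.linalg.solve(M, w)      # (A U A^T)^{-1} A sqrt(U) v
    return s * (A.T @ x)           # sqrt(U) A^T (...)        (length n)

# Illustrative usage with random data.
rng = np.random.default_rng(0)
d, n = 5, 12
A = rng.standard_normal((d, n))
u = rng.uniform(0.5, 2.0, size=n)  # positive diagonal entries
v = rng.standard_normal(n)
print(project(A, u, v))
```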

[1] Thatchaphol Saranurak, et al. Dynamic Matrix Inverse: Improved Algorithms and Matching Conditional Lower Bounds, 2019, 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS).

[2] Ken-ichi Kawarabayashi, et al. Deterministic Edge Connectivity in Near-Linear Time, 2014, J. ACM.

[3] Kenneth Steiglitz, et al. Combinatorial Optimization: Algorithms and Complexity, 1981.

[4] François Le Gall, et al. Improved Rectangular Matrix Multiplication using Powers of the Coppersmith-Winograd Tensor, 2017, SODA.

[5] Kurt M. Anstreicher, et al. Volumetric path following algorithms for linear programming, 1997, Math. Program..

[6] L. G. Khachiyan. A polynomial algorithm in linear programming, 1979.

[7] Piotr Sankowski, et al. Dynamic transitive closure via dynamic matrix inverse: extended abstract, 2004, 45th Annual IEEE Symposium on Foundations of Computer Science.

[8] François Le Gall, et al. Powers of tensors and fast matrix multiplication, 2014, ISSAC.

[9] Yin Tat Lee, et al. Solving linear programs in the current matrix multiplication time, 2018, STOC.

[10] Kurt M. Anstreicher, et al. A New Infinity-Norm Path Following Algorithm for Linear Programming, 1995, SIAM J. Optim..

[11] Matthias Christandl, et al. Barriers for fast matrix multiplication from irreversibility, 2018, CCC.

[12] Michael J. Todd, et al. Self-Scaled Barriers and Interior-Point Methods for Convex Programming, 1997, Math. Oper. Res..

[13] Kenneth L. Clarkson, et al. Las Vegas algorithms for linear and integer programming when the dimension is small, 1995, JACM.

[14] Timothy M. Chan. Improved Deterministic Algorithms for Linear Programming in Low Dimensions, 2016, SODA.

[15] Yin Tat Lee, et al. Efficient Inverse Maintenance and Faster Algorithms for Linear Programming, 2015, 2015 IEEE 56th Annual Symposium on Foundations of Computer Science.

[16] Shinji Mizuno, et al. An O(√nL)-Iteration Homogeneous and Self-Dual Linear Programming Algorithm, 1994, Math. Oper. Res..

[17] L. Khachiyan. Polynomial algorithms in linear programming, 1980.

[18] Josh Alman, et al. Limits on All Known (and Some Unknown) Approaches to Matrix Multiplication, 2018, 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS).

[19] Micha Sharir, et al. A subexponential bound for linear programming, 1992, SCG '92.

[20] Bernard Chazelle, et al. A minimum spanning tree algorithm with inverse-Ackermann type complexity, 2000, JACM.

[21] N. Megiddo. Pathways to the optimal set in linear programming, 1989.

[22] Yurii Nesterov, et al. Acceleration and Parallelization of the Path-Following Interior Point Method for a Linearly Constrained Convex Quadratic Problem, 1991, SIAM J. Optim..

[23] Matthias Christandl, et al. Barriers for rectangular matrix multiplication, 2020, Electron. Colloquium Comput. Complex..

[24] Narendra Karmarkar, et al. A new polynomial-time algorithm for linear programming, 1984, Comb..

[25] Gil Kalai, et al. A subexponential randomized simplex algorithm (extended abstract), 1992, STOC '92.

[26] Omer Reingold, et al. Derandomization Beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space, 2017, 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS).

[27] David R. Karger, et al. Minimum cuts in near-linear time, 1998, JACM.

[28] Yin Tat Lee, et al. Solving Empirical Risk Minimization in the Current Matrix Multiplication Time, 2019, COLT.

[29] James Renegar, et al. A polynomial-time algorithm, based on Newton's method, for linear programming, 1988, Math. Program..

[30] Josh Alman, et al. Limits on the Universal method for matrix multiplication, 2018, CCC.

[31] Pravin M. Vaidya, et al. A new algorithm for minimizing convex functions over convex sets, 1996, Math. Program..

[32] Virginia Vassilevska Williams, et al. Multiplying matrices faster than Coppersmith-Winograd, 2012, STOC '12.

[33] Virginia Vassilevska Williams. Limits on All Known (and Some Unknown) Approaches to Matrix Multiplication, 2019, ISSAC.

[34] Pravin M. Vaidya, et al. An algorithm for linear programming which requires O(((m+n)n^2 + (m+n)^{1.5}n)L) arithmetic operations, 1987, Math. Program..

[35] Josh Alman, et al. Further Limitations of the Known Approaches for Matrix Multiplication, 2017, ITCS.

[36] Yin Tat Lee, et al. Path Finding Methods for Linear Programming: Solving Linear Programs in Õ(√rank) Iterations and Faster Algorithms for Maximum Flow, 2014, 2014 IEEE 55th Annual Symposium on Foundations of Computer Science.

[37] Monika Henzinger, et al. Distributed edge connectivity in sublinear time, 2019, STOC.

[38] Kurt M. Anstreicher, et al. Linear Programming in O([n^3/ln n]L) Operations, 1999, SIAM J. Optim..

[39] Pravin M. Vaidya, et al. Speeding-up linear programming using fast matrix multiplication, 1989, 30th Annual Symposium on Foundations of Computer Science.

[40] Bernard Chazelle, et al. On linear-time deterministic algorithms for optimization problems in fixed dimension, 1996, SODA '93.

[41] Pravin M. Vaidya, et al. A Technique for Bounding the Number of Iterations in Path Following Algorithms, 1993.

[42] Seth Pettie, et al. An optimal minimum spanning tree algorithm, 2000, JACM.