Linear Convergence of Proximal-Gradient Methods under the Polyak-Łojasiewicz Condition

In 1963, Polyak proposed a simple condition that is sufficient to show that gradient descent has a global linear convergence rate. This condition is a special case of the Łojasiewicz inequality proposed in the same year, and it does not require strong convexity (or even convexity). In this work, we show that this much-older Polyak-Łojasiewicz (PL) inequality is actually weaker than the four main conditions that have been explored to show linear convergence rates without strong convexity over the last 25 years. We also use the PL inequality to give new analyses of randomized and greedy coordinate descent methods, as well as stochastic gradient methods with decreasing or constant step sizes. We then consider a natural generalization of the inequality that applies to proximal-gradient methods for non-smooth optimization, and show that this generalization implies that other conditions proposed to achieve linear convergence for ℓ1-regularized least squares are unnecessary. Along the way, we give new convergence results for a wide variety of problems in machine learning: least squares, logistic regression, boosting, ℓ1-regularization, support vector machines, stochastic dual coordinate ascent, and stochastic variance-reduced gradient methods.
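
For concreteness, the inequality in question can be stated as follows; this is a minimal sketch assuming f is L-smooth with optimal value f^* and PL constant μ > 0:

\[
\frac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\,\bigl(f(x) - f^*\bigr) \quad \text{for all } x,
\]
under which gradient descent with step size $1/L$,
\[
x_{k+1} = x_k - \tfrac{1}{L}\,\nabla f(x_k),
\]
satisfies the global linear rate
\[
f(x_k) - f^* \;\le\; \Bigl(1 - \frac{\mu}{L}\Bigr)^{k}\bigl(f(x_0) - f^*\bigr).
\]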