Random Block-Coordinate Gradient Projection Algorithms

In this paper, we study gradient projection algorithms that update a randomly chosen subset of the decision variables at each iteration; these algorithms generalize random coordinate descent methods. We analyze the algorithms both with and without the assumption of strong convexity of the objective function. We also present an accelerated version of the algorithm based on Nesterov's two-step gradient method [1]. In each case, we prove convergence and establish a bound on the rate of convergence. The randomized algorithms exhibit rates of convergence similar to those of their deterministic, full-gradient counterparts.
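To make the setting concrete, the following is a minimal sketch of one random block-coordinate gradient projection step: a block of coordinates is sampled uniformly at random, and only that block is updated by a projected partial-gradient step. The function names, the uniform sampling, and the toy box-constrained quadratic are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

def rbcgp_step(x, grad_f, proj_blocks, blocks, alpha, rng):
    """One illustrative step: project-gradient update on one random block."""
    i = rng.integers(len(blocks))                 # sample a block uniformly at random
    idx = blocks[i]                               # coordinate indices of block i
    g = grad_f(x)[idx]                            # partial gradient for that block
    x = x.copy()
    x[idx] = proj_blocks[i](x[idx] - alpha * g)   # project onto the block's constraint set
    return x

# Toy example (assumed for illustration): minimize f(x) = 0.5 * ||x - c||^2
# over the box [0, 1]^4, with the coordinates split into two blocks.
c = np.array([0.3, 1.5, -0.2, 0.8])
grad_f = lambda x: x - c
blocks = [np.array([0, 1]), np.array([2, 3])]
proj_blocks = [lambda z: np.clip(z, 0.0, 1.0)] * 2  # projection onto [0, 1]^2

rng = np.random.default_rng(0)
x = np.zeros(4)
for _ in range(200):
    x = rbcgp_step(x, grad_f, proj_blocks, blocks, alpha=0.5, rng=rng)
print(x)  # approaches the projection of c onto the box: [0.3, 1.0, 0.0, 0.8]
```

Because only one block's gradient is needed per iteration, each step is cheaper than a full gradient projection step, which is the trade-off the convergence-rate analysis quantifies.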