Appendix for Paper “Asynchronous Doubly Stochastic Group Regularized Learning”

In this appendix, we follow the analysis of (Liu and Wright, 2015) and prove the convergence rates of AsyDSPG+ (Theorems 5 and 6). Specifically, AsyDSPG+ achieves a linear convergence rate when the function $f$ satisfies the optimal strong convexity property, and a sublinear rate when $f$ is generally convex (Theorem 5). In addition, AsyDSPG+ achieves a sublinear rate when $f$ is non-convex (Theorem 6). Before presenting the theoretical analysis, we give the definitions of $\hat{x}_{t,t'+1}$, $x^s_{t+1}$, $\tilde{\nabla}F(x_t)$, and the explanation of $x_t$ used in the analysis, as follows.