Epoch gradient descent for smoothed hinge-loss linear SVMs

A gradient descent method for strongly convex problems with Lipschitz continuous gradients requires only O(log_q ε) iterations to obtain an ε-accurate solution, where q is a constant in (0, 1). Support Vector Machines (SVMs) penalized with the popular hinge loss yield a strongly convex objective, but one that does not have a Lipschitz continuous gradient. Using Nesterov's smooth approximation technique [1], we construct SVMs whose objective is both strongly convex and has a Lipschitz continuous gradient. The simple gradient method applied to the smoothed SVM converges quickly, but the solution it produces is not the exact maximum-margin separating hyperplane. To obtain an exact solution as well as fast convergence, we propose a hybrid approach: epoch gradient descent.
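To make the construction concrete, the sketch below implements the Nesterov-smoothed hinge loss and a generic epoch-style gradient loop in Python/NumPy. The piecewise loss follows from the max-form h_μ(z) = max_{0≤α≤1} α(1−z) − (μ/2)α², whose maximizer is α* = clip((1−z)/μ, 0, 1). Note this is only a minimal sketch under stated assumptions: the abstract does not specify the paper's actual epoch schedule or step sizes, so the halving of the smoothing parameter μ per epoch, the warm-starting, and all names and parameters (`lam`, `mu0`, `epochs`, `iters_per_epoch`) are hypothetical illustrations, not the authors' method.

```python
import numpy as np

def smoothed_hinge(z, mu):
    """Nesterov-smoothed hinge: h_mu(z) = max_{0<=a<=1} a*(1-z) - (mu/2)*a^2."""
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    low = z <= 1.0 - mu                  # a* = 1: linear branch
    mid = (z > 1.0 - mu) & (z < 1.0)     # 0 < a* < 1: quadratic branch
    out[low] = 1.0 - z[low] - mu / 2.0
    out[mid] = (1.0 - z[mid]) ** 2 / (2.0 * mu)
    return out

def svm_grad(w, X, y, lam, mu):
    """Gradient of (lam/2)||w||^2 + (1/n) sum_i h_mu(y_i x_i . w)."""
    z = y * (X @ w)
    alpha = np.clip((1.0 - z) / mu, 0.0, 1.0)   # a*; dh_mu/dz = -a*
    return lam * w - X.T @ (alpha * y) / len(y)

def epoch_gradient_descent(X, y, lam=1e-2, mu0=1.0, epochs=10,
                           iters_per_epoch=100):
    """Hypothetical epoch loop: gradient descent on the smoothed SVM,
    warm-starting each epoch and tightening mu (assumed schedule)."""
    n, d = X.shape
    R2 = np.max(np.sum(X ** 2, axis=1))  # bound on row norms squared
    w = np.zeros(d)
    mu = mu0
    for _ in range(epochs):
        L = lam + R2 / mu                # gradient Lipschitz bound for this mu
        step = 1.0 / L
        for _ in range(iters_per_epoch):
            w -= step * svm_grad(w, X, y, lam, mu)
        mu *= 0.5                        # sharper hinge approximation next epoch
    return w

# Usage on synthetic separable data (labels in {-1, +1}):
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X @ rng.normal(size=5))
w = epoch_gradient_descent(X, y)
```

Each epoch enjoys the fast linear rate of smooth strongly convex gradient descent for its fixed μ, while shrinking μ across epochs drives the iterates toward the exact (non-smooth) hinge-loss solution, which is the trade-off the hybrid approach is designed to resolve.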