Training support vector machines: a quantum-computing perspective

Recent advances in characterizing the generalization ability of support vector machines (SVMs) exploit refined concepts, such as Rademacher estimates of model complexity and nonlinear criteria for weighting empirical errors. Those methods improve the representation ability of SVMs and tighten generalization bounds. On the other hand, they render quadratic-programming algorithms inapplicable, so the SVM training process can no longer benefit from the notable efficiency of those specialized techniques. This paper considers the possibility of using quantum computing to solve the resulting optimization problem, especially in the case of digital SVM implementations. The behaviors of conventional and enhanced SVMs are compared, supported by experiments on both a synthetic and a real-world problem, and the related differences between quadratic-programming and quantum-based optimization techniques are analyzed.
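To make concrete the conventional quadratic-programming training that the abstract contrasts with quantum-based optimization, the following is a minimal sketch (not the paper's method) of solving the standard SVM dual QP with a simplified SMO-style solver on a hypothetical toy dataset; the dataset, parameters, and function names are illustrative assumptions.

```python
# Sketch: conventional SVM training by (approximately) solving the dual QP
#   max_a  sum_i a_i - 1/2 sum_ij a_i a_j y_i y_j <x_i, x_j>
#   s.t.   0 <= a_i <= C,  sum_i a_i y_i = 0
# via a simplified SMO loop. Toy data and settings are illustrative only.
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def train_svm(X, y, C=1.0, tol=1e-3, max_passes=20):
    m = len(X)
    alpha = [0.0] * m   # dual variables
    b = 0.0             # bias term

    def f(x):  # decision function with current alpha, b
        return sum(alpha[k] * y[k] * dot(X[k], x) for k in range(m)) + b

    random.seed(0)
    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(m):
            Ei = f(X[i]) - y[i]
            # Update a pair (alpha_i, alpha_j) when KKT conditions are violated
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = random.choice([k for k in range(m) if k != i])
                Ej = f(X[j]) - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                if y[i] != y[j]:
                    L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L == H:
                    continue
                eta = 2 * dot(X[i], X[j]) - dot(X[i], X[i]) - dot(X[j], X[j])
                if eta >= 0:
                    continue
                alpha[j] = min(H, max(L, aj_old - y[j] * (Ei - Ej) / eta))
                if abs(alpha[j] - aj_old) < 1e-5:
                    continue
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                # Recompute the bias from the updated pair
                b1 = (b - Ei - y[i] * (alpha[i] - ai_old) * dot(X[i], X[i])
                      - y[j] * (alpha[j] - aj_old) * dot(X[i], X[j]))
                b2 = (b - Ej - y[i] * (alpha[i] - ai_old) * dot(X[i], X[j])
                      - y[j] * (alpha[j] - aj_old) * dot(X[j], X[j]))
                if 0 < alpha[i] < C:
                    b = b1
                elif 0 < alpha[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    return f

# Toy linearly separable problem (hypothetical data)
X = [[2.0, 2.0], [3.0, 3.0], [2.0, 3.0], [0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
y = [1, 1, 1, -1, -1, -1]
f = train_svm(X, y)
preds = [1 if f(x) > 0 else -1 for x in X]
```

The efficiency of such solvers depends on the objective remaining a quadratic program; once nonlinear error-weighting criteria replace the quadratic loss, this machinery no longer applies, which motivates the alternative optimization approach the paper studies.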