Support vector machine interpretation

Decisions made by support vector machines (SVMs) are hard to interpret from a human perspective. We take advantage of a compact SVM solution developed previously, the growing support vector classifier (GSVC), to provide an interpretation of SVM decisions in terms of a segmentation of the input space into Voronoi regions (determined by the prototypes extracted during GSVC training), plus rules built as linear combinations of the input variables. We show through experiments on public-domain datasets that the resulting interpretable machines have high fidelity to the original SVM and an accuracy comparable to it.
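
To make the interpretation scheme concrete, the following is a minimal sketch (not the authors' GSVC code) of how such an interpretable machine could evaluate a point: find the Voronoi region by nearest-prototype assignment, then apply that region's linear rule. The names `prototypes`, `weights`, and `biases` are hypothetical placeholders for quantities produced by GSVC training.

```python
# Minimal sketch, assuming prototypes and per-region linear rules are
# already available; this is illustrative, not the authors' implementation.
import numpy as np

def interpretable_predict(x, prototypes, weights, biases):
    """Predict the label of x using per-region linear rules.

    prototypes : (k, d) array of prototype vectors (Voronoi sites)
    weights    : (k, d) array, one linear rule w_i per region
    biases     : (k,) array of rule offsets b_i
    """
    # Voronoi assignment: index of the prototype closest to x
    region = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    # Local rule: a linear combination of the input variables
    score = weights[region] @ x + biases[region]
    return np.sign(score), region  # label plus the region that explains it
```

Reporting the winning region alongside the label is what makes the decision inspectable: each prediction reduces to "x falls in the region of prototype i, where the rule is sign(w_i · x + b_i)".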