Robust Mahalanobis Metric Learning via Geometric Approximation Algorithms

Learning Mahalanobis metric spaces is an important problem that has found numerous applications. Several algorithms have been designed for this problem, including Information Theoretic Metric Learning (ITML) [Davis et al. 2007] and Large Margin Nearest Neighbor (LMNN) classification [Weinberger and Saul 2009]. We study the problem of learning a Mahalanobis metric space in the presence of adversarial label noise. To that end, we consider a formulation of Mahalanobis metric learning as an optimization problem, where the objective is to minimize the number of violated similarity/dissimilarity constraints. We show that for any fixed ambient dimension, there exists a fully polynomial-time approximation scheme (FPTAS) with nearly-linear running time. This result is obtained using tools from the theory of linear programming in low dimensions. As a consequence, we obtain a fully parallelizable algorithm that recovers a nearly-optimal metric space, even when a small fraction of the labels is corrupted adversarially. We also discuss practical improvements to the algorithm, and present experimental results on real-world, synthetic, and poisoned data sets.
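
To make the objective concrete, the following is a minimal sketch (not the paper's algorithm) of counting violated similarity/dissimilarity constraints under a Mahalanobis metric. The PSD matrix A, the thresholds u and l (similar pairs should be within distance u, dissimilar pairs at least l apart), and all function names are illustrative assumptions for exposition:

    import numpy as np

    def mahalanobis_dist(A, x, y):
        # Distance induced by a PSD matrix A: sqrt((x - y)^T A (x - y)).
        d = x - y
        return np.sqrt(d @ A @ d)

    def num_violations(A, similar, dissimilar, u, l):
        # Count constraints violated by the metric defined by A.
        # similar / dissimilar are lists of (x, y) point pairs;
        # a similar pair is violated if its distance exceeds u,
        # a dissimilar pair if its distance falls below l.
        # (Thresholds u, l are assumed constraint parameters.)
        v = sum(mahalanobis_dist(A, x, y) > u for x, y in similar)
        v += sum(mahalanobis_dist(A, x, y) < l for x, y in dissimilar)
        return v

    # Toy usage: under the identity metric, only the dissimilar pair violates.
    A = np.eye(2)
    sim = [(np.array([0.0, 0.0]), np.array([0.1, 0.0]))]
    dis = [(np.array([0.0, 0.0]), np.array([0.2, 0.0]))]
    print(num_violations(A, sim, dis, u=1.0, l=1.0))  # prints 1

The optimization problem described above seeks the PSD matrix A minimizing this count, which is what makes the formulation robust: an adversary corrupting a small fraction of labels can inflate the objective by at most that fraction of constraints.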