Differentiable Ranks and Sorting using Optimal Transport

Sorting an array is a fundamental routine in machine learning, one that is used to compute rank-based statistics, cumulative distribution functions (CDFs) and quantiles, or to select closest neighbors and labels. The sorting function is, however, piecewise constant (the sorting permutation of a vector does not change if its entries are infinitesimally perturbed) and therefore carries no gradient information to back-propagate. We propose a framework to sort elements that is algorithmically differentiable. We leverage the fact that sorting can be seen as a particular instance of the optimal transport (OT) problem on $\mathbb{R}$, from input values to a predefined array of sorted values (e.g. $1,2,\dots,n$ if the input array has $n$ elements). Building upon this link, we propose generalized CDF and quantile operators obtained by varying the size and weights of the target presorted array. Because this amounts to using the so-called Kantorovich formulation of OT, we call these quantities K-sorts, K-CDFs and K-quantiles. We recover differentiable operators by adding an entropic regularization to the OT problem and approximating its solution with a few Sinkhorn iterations. We call the resulting operators S-sorts, S-CDFs and S-quantiles, and use them in various learning settings: we benchmark them against the recently proposed neuralsort [Grover et al., 2019], propose applications to quantile regression, and introduce differentiable formulations of the top-k accuracy that deliver state-of-the-art performance.
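As a concrete illustration of the construction described above, the sketch below solves the entropy-regularized problem $\min_{P \in U(a,b)} \langle P, C \rangle - \varepsilon H(P)$ between the input values (uniform weights $a$) and a presorted target grid (uniform weights $b$) with a squared-distance cost, runs plain Sinkhorn iterations, and reads soft ranks and soft sorted values off the resulting transport plan $P$. This is a minimal NumPy sketch under simplifying assumptions (uniform weights, a target of the same size $n$, inputs squashed to $[0,1]$ for numerical stability); the helper name sinkhorn_soft_sort and the defaults for eps and n_iter are illustrative choices, not values taken from the paper.

import numpy as np

def sinkhorn_soft_sort(x, eps=0.05, n_iter=300):
    # Hypothetical helper illustrating the S-sort / S-rank construction:
    # entropy-regularized OT from the inputs to a presorted target grid.
    x = np.asarray(x, dtype=float)
    n = len(x)
    a = np.full(n, 1.0 / n)               # uniform source weights
    b = np.full(n, 1.0 / n)               # uniform target weights
    y = np.linspace(0.0, 1.0, n)          # presorted target anchors
    z = (x - x.min()) / (x.max() - x.min() + 1e-12)  # squash to [0, 1]
    C = (z[:, None] - y[None, :]) ** 2    # squared-distance cost matrix
    K = np.exp(-C / eps)                  # Gibbs kernel
    u, v = np.ones(n), np.ones(n)
    for _ in range(n_iter):               # Sinkhorn fixed-point updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]       # regularized transport plan
    soft_ranks = n * P @ np.arange(1, n + 1)  # expected rank of each input
    soft_sorted = n * P.T @ x                 # weighted average per position
    return soft_ranks, soft_sorted

ranks, values = sinkhorn_soft_sort([0.3, -1.2, 2.5, 0.7])
print(ranks)   # close to (2, 1, 4, 3)
print(values)  # close to (-1.2, 0.3, 0.7, 2.5)

As $\varepsilon \to 0$ the plan concentrates on the sorting permutation, so the soft quantities converge to hard ranks and sorted values; since every operation above is a smooth function of the input, gradients can be back-propagated through both outputs, which is what makes losses such as a differentiable top-k accuracy trainable.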

[1] Silvia Chiappa, et al. Wasserstein Fair Classification, 2019, UAI.

[2] Yaniv Romano, et al. Conformalized Quantile Regression, 2019, NeurIPS.

[3] S. Ermon, et al. Stochastic Optimization of Sorting Networks via Continuous Relaxations, 2019, ICLR.

[4] Gabriel Peyré, et al. Computational Optimal Transport, 2018, Found. Trends Mach. Learn.

[5] Andrew Zisserman, et al. Smooth Loss Functions for Deep Top-k Classification, 2018, ICLR.

[6] Scott W. Linderman, et al. Learning Latent Permutations with Gumbel-Sinkhorn Networks, 2018, ICLR.

[7] Matthieu Lerasle, et al. Robust Machine Learning by Median-of-Means: Theory and Practice, 2019.

[8] Jean-Philippe Vert, et al. Supervised Quantile Normalisation, 2017, ArXiv.

[9] G. Lugosi, et al. Regularization, sparse recovery, and median-of-means tournaments, 2017, Bernoulli.

[10] Bernhard Schmitzer, et al. Stabilized Sparse Scaling Algorithms for Entropy Regularized Transport Problems, 2016, SIAM J. Sci. Comput.

[11] Nicolas Courty, et al. Wasserstein discriminant analysis, 2016, Machine Learning.

[12] Tommi S. Jaakkola, et al. Learning Population-Level Diffusions with Generative RNNs, 2016, ICML.

[13] Yang Zou, et al. Sliced Wasserstein Kernels for Probability Distributions, 2016, CVPR.

[14] F. Santambrogio. Optimal Transport for Applied Mathematicians: Calculus of Variations, PDEs, and Modeling, 2015.

[15] Carlos Eduardo Scheidegger, et al. Certifying and Removing Disparate Impact, 2014, KDD.

[16] Yann Brenier, et al. Rearrangement, convection, convexity and entropy, 2013, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.

[17] Marco Cuturi, et al. Sinkhorn Distances: Lightspeed Computation of Optimal Transport, 2013, NIPS.

[18] Tian Xia, et al. Direct 0-1 Loss Minimization and Margin Maximization with Boosting, 2013, NIPS.

[19] Arnaud Doucet, et al. Fast Computation of Wasserstein Barycenters, 2013, ICML.

[20] Scott Sanner, et al. Algorithms for Direct 0-1 Loss Optimization in Binary Classification, 2013, ICML.

[21] Stephen P. Boyd, et al. Accuracy at the Top, 2012, NIPS.

[22] Ryan P. Adams, et al. Ranking via Sinkhorn Propagation, 2011, ArXiv.

[23] Julien Rabin, et al. Wasserstein Barycenter and Its Application to Texture Mixing, 2011, SSVM.

[24] Julie Delon, et al. Local Matching Indicators for Transport Problems with Concave Costs, 2011, SIAM J. Discret. Math.

[25] Tao Qin, et al. A general approximation framework for direct optimization of information retrieval measures, 2010, Information Retrieval.

[26] Qiang Wu, et al. Learning to Rank Using an Ensemble of Lambda-Gradient Models, 2010, Yahoo! Learning to Rank Challenge.

[27] A. Galichon, et al. Matching with Trade-Offs: Revealed Preferences Over Competing Characteristics, 2009, arXiv:2102.12811.

[28] Quoc V. Le, et al. Learning to Rank with Nonsmooth Cost Functions, 2006, NIPS.

[29] Kilian Q. Weinberger, et al. Distance Metric Learning for Large Margin Nearest Neighbor Classification, 2005, NIPS.

[30] Jaana Kekäläinen, et al. Cumulated gain-based evaluation of IR techniques, 2002, TOIS.

[31] Robert E. Tarjan, et al. Dynamic trees as search trees via Euler tours, applied to the network simplex algorithm, 1997, Math. Program.

[32] Alan L. Yuille, et al. The invisible hand algorithm: Solving the assignment problem with statistical physics, 1994, Neural Networks.

[33] R. Koenker, et al. An interior point algorithm for nonlinear quantile regression, 1996.

[34] J. Lorenz, et al. On the scaling of multidimensional matrices, 1989.

[35] P. Rousseeuw. Least Median of Squares Regression, 1984.

[36] I. Barrodale, et al. An Improved Algorithm for Discrete $l_1$ Linear Approximation, 1973.

[37] Gabriel Peyré, et al. Wasserstein Barycentric Coordinates: Histogram Regression Using Optimal Transport, 2021.

[38] Julien Rabin, et al. Sliced and Radon Wasserstein Barycenters of Measures, 2014, Journal of Mathematical Imaging and Vision.

[39] John N. Tsitsiklis, et al. Introduction to linear optimization, 1997, Athena Scientific Optimization and Computation Series.

[40] A. Wilson, et al. Use of entropy maximizing models in theory of trip distribution, mode split and route split, 1969.