Generalization bounds for ranking algorithms via almost everywhere stability

The goal of ranking is to learn a real-valued ranking function that induces an ordering over an instance space. A learning algorithm is stable if its output varies only in a limited way in response to small changes in the training set. This paper studies the 'almost everywhere' stability of ranking algorithms: notions of strong stability and weak stability for ranking algorithms are defined, and generalization bounds for stable ranking algorithms are obtained. In particular, the relationship between strong (weak) loss stability and strong (weak) score stability is also discussed.
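As a rough illustration of the stability notion involved (the notation below is assumed, following a common uniform-stability formulation for ranking rather than the paper's exact definitions), strong loss stability can be sketched as:

```latex
% Hedged sketch: a common form of (strong/uniform) loss stability for
% ranking; not necessarily this paper's exact definition.
% A ranking algorithm has loss stability \beta(m) if, for every training
% sample S = (z_1, \dots, z_m), every sample S^i obtained from S by
% replacing z_i with z_i', and every pair of instances (x, x'),
\[
  \bigl| \ell\bigl(f_S; x, x'\bigr) - \ell\bigl(f_{S^i}; x, x'\bigr) \bigr|
  \;\le\; \beta(m),
\]
% where f_S denotes the ranking function output on S and \ell is the
% pairwise ranking loss. An ``almost everywhere'' (weak) variant would
% require this bound only with high probability over the draw of S and
% z_i', rather than uniformly over all samples.
```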