Ranking is an essential component of many tasks, such as information retrieval and collaborative filtering. The underlying task often attempts to maximize some evaluation metric, such as mean average precision, over rankings. Most past work on learning to rank has focused on likelihood- or margin-based approaches. In this work we explore directly maximizing rank-based metrics, a family of metrics that depend only on the order of the ranked items. This allows us to maximize different metrics for the same training data. We show how the parameter space of linear scoring functions can be reduced to a multinomial manifold. Parameter estimation is accomplished by optimizing the evaluation metric over the manifold. Results from ad hoc information retrieval show that our model yields significant improvements in effectiveness over other approaches.
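As a minimal illustration of why rank-based metrics can be maximized directly, the Python sketch below computes average precision for the ranking induced by a linear scoring function. This is not the paper's implementation; the names (linear_scores, average_precision) are illustrative. Because the metric depends only on the document ordering, any positive rescaling of the parameter vector leaves it unchanged, which is the kind of invariance that permits reducing the parameter space of linear scoring functions to a lower-dimensional manifold.

```python
# Sketch only: a rank-based metric (average precision) evaluated on the
# ranking induced by a linear scoring function w . x. All names here are
# hypothetical and not taken from the paper.

import numpy as np

def linear_scores(w, X):
    """Score each document feature vector x as the dot product w . x."""
    return X @ w

def average_precision(scores, relevance):
    """Average precision of the ranking induced by `scores`.

    `relevance` is a 0/1 array. Only the order of `scores` matters,
    so rescaling w by any positive constant leaves AP unchanged.
    """
    order = np.argsort(-scores)            # rank documents by descending score
    rel = relevance[order]
    hits = np.cumsum(rel)                  # relevant documents seen so far
    precision_at_k = hits / (np.arange(len(rel)) + 1)
    return (precision_at_k * rel).sum() / max(rel.sum(), 1)

# Two parameter vectors that induce the same ordering yield the same AP,
# demonstrating that the metric depends only on the rank order.
X = np.array([[1.0, 0.2], [0.3, 0.9], [0.5, 0.5]])
rel = np.array([1, 0, 1])
w = np.array([0.7, 0.3])
assert np.isclose(average_precision(linear_scores(w, X), rel),
                  average_precision(linear_scores(2 * w, X), rel))
```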