Ranking in response to user queries is a central problem in information retrieval, data mining, and machine learning. In the era of big data, traditional effectiveness-centric ranking techniques require ever more hardware and energy to sustain reasonable ranking speed over large collections. Combating data growth simply by adding machines quickly becomes prohibitively expensive, because data volumes grow rapidly and without regard to cost. "Learning to efficiently rank" [3] offers a cost-effective alternative for ranking over large collections (e.g., billions of documents): it addresses the question of whether ranking effectiveness can be improved on large data without incurring (much) additional cost.
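To make the efficiency-effectiveness trade-off concrete, the sketch below shows one common realization of cost-aware ranking, a two-stage cascade in the spirit of the cascade ranking model of [2]: a cheap scorer filters the collection, and an expensive learned model re-ranks only the survivors. This is a minimal illustrative sketch, not the method described above; the Document class and the cheap_score / expensive_score functions are hypothetical placeholders supplied by the caller.

```python
# Minimal sketch of a two-stage cascade ranker: a cheap first stage filters the
# collection so the expensive model only scores a small candidate set.
# All names here (Document, cheap_score, expensive_score) are hypothetical.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Document:
    doc_id: str
    text: str


def cascade_rank(
    query: str,
    docs: List[Document],
    cheap_score: Callable[[str, Document], float],      # e.g., a BM25-style score
    expensive_score: Callable[[str, Document], float],  # e.g., a learned model with costly features
    k: int = 100,
) -> List[Document]:
    """Rank documents with a cost-aware cascade.

    Stage 1 scores every document with the cheap function and keeps the top-k.
    Stage 2 applies the expensive function only to those k survivors, so the
    costly features are computed for a tiny fraction of the collection.
    """
    # Stage 1: cheap scoring over the whole collection.
    first_pass = sorted(docs, key=lambda d: cheap_score(query, d), reverse=True)[:k]
    # Stage 2: expensive re-ranking restricted to the surviving candidates.
    return sorted(first_pass, key=lambda d: expensive_score(query, d), reverse=True)
```

The design choice is that effectiveness is concentrated where it matters (the top of the ranking), while the cost of the expensive model is bounded by k rather than by collection size.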
[1] Brian D. Davison, et al. Learning to rank for freshness and relevance. SIGIR, 2011.
[2] Jimmy J. Lin, et al. A cascade ranking model for efficient ranked retrieval. SIGIR, 2011.
[3] Lidan Wang, et al. Learning to efficiently rank. SIGIR, 2010.
[4] Tie-Yan Liu, et al. Learning to rank for information retrieval. SIGIR, 2009.
[5] Kilian Q. Weinberger, et al. The Greedy Miser: Learning under Test-time Budgets. ICML, 2012.
[6] Craig MacDonald, et al. Efficient and effective retrieval using selective pruning. WSDM, 2013.
[7] Jimmy J. Lin, et al. Ranking under temporal constraints. CIKM, 2010.
[8] Paul N. Bennett, et al. Robust ranking models via risk-sensitive optimization. SIGIR, 2012.