We address the problem of unsupervised ensemble ranking in this paper. Traditional approaches either combine multiple ranking criteria into a unified representation to obtain an overall ranking score, or utilize rank fusion or aggregation techniques to combine the ranking results. Beyond these combine-then-rank and rank-then-combine approaches, we propose a novel rank-learn-combine ranking framework, called Interactive Ranking (iRANK), which allows two base rankers to "teach" each other before combination by providing their own ranking results as feedback to each other so as to boost ranking performance. This mutual ranking refinement process continues until the two base rankers can no longer learn from each other. The overall performance is improved by the enhancement of the base rankers through this mutual learning mechanism. We apply the framework to the sentence ranking problem in query-focused summarization and evaluate its effectiveness on the DUC 2005 data set. The results are encouraging, with consistent and promising improvements.
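The rank-learn-combine loop described above can be sketched as follows. This is an illustrative sketch only: the specific refinement rule (linear interpolation of min-max normalized scores, with a hypothetical weight `alpha`), the convergence test (stable orderings), and the final averaging combination are assumptions for illustration; the abstract does not specify iRANK's actual feedback mechanism.

```python
def normalize(scores):
    """Min-max normalize a list of scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0  # guard against all-equal scores
    return [(s - lo) / span for s in scores]

def ranking(scores):
    """Return item indices ordered from highest to lowest score."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])

def irank(scores_a, scores_b, alpha=0.7, max_rounds=10):
    """Let two base rankers refine each other until their orderings
    stop changing, then combine them into a final score list."""
    a, b = normalize(scores_a), normalize(scores_b)
    for _ in range(max_rounds):
        # Each ranker absorbs the other's current scores as feedback.
        new_a = [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]
        new_b = [alpha * y + (1 - alpha) * x for x, y in zip(a, b)]
        # Stop once neither ranker's ordering changes (no more to learn).
        if ranking(new_a) == ranking(a) and ranking(new_b) == ranking(b):
            break
        a, b = new_a, new_b
    # Combine the two refined rankers for the overall ranking score.
    return [(x + y) / 2 for x, y in zip(a, b)]
```

For example, `ranking(irank([0.9, 0.1, 0.5], [0.8, 0.3, 0.4]))` ranks item 0 first and item 1 last, since both base rankers broadly agree; the interpolation weight controls how much each ranker trusts its own view versus the feedback it receives.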