Information Retrieval (IR) addresses a ranking problem: given a query $q$ and a corpus $C$, the documents of $C$ should be ranked such that the documents relevant to $q$ appear above the others. This task is generally performed by ranking the documents $d \in C$ according to their similarity to $q$, $sim(q,d)$. An effective function $(a,b) \mapsto sim(a,b)$ could be learned from a large set of queries with their corresponding relevance assessments. However, such data are especially expensive to label; as an alternative, we propose to rely on hyperlink data, which convey analogous semantic relationships. We then show empirically that a measure $sim$ inferred from hyperlinked documents can actually outperform the state-of-the-art {\em Okapi} approach when applied over a non-hyperlinked retrieval corpus.
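To make the ranking setting concrete, the following is a minimal sketch of scoring and ranking documents with the Okapi BM25 weighting, the baseline similarity the abstract compares against. The tokenized corpus, the query, and the parameter values ($k_1 = 1.2$, $b = 0.75$) are illustrative assumptions, not taken from the paper.

```python
import math
from collections import Counter

def bm25_rank(query, corpus, k1=1.2, b=0.75):
    """Rank documents of `corpus` by Okapi BM25 similarity to `query`.

    query: list of terms; corpus: list of documents, each a list of terms.
    Returns document indices sorted by descending BM25 score.
    """
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    # document frequency of each term
    df = Counter()
    for d in corpus:
        df.update(set(d))

    def idf(t):
        return math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))

    scores = []
    for d in corpus:
        tf = Counter(d)
        s = sum(
            idf(t) * tf[t] * (k1 + 1)
            / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
            for t in query
        )
        scores.append(s)
    # sim(q, d) induces the ranking: highest-scoring documents first
    return sorted(range(N), key=lambda i: -scores[i])
```

In the learned-similarity setting the paper proposes, this hand-crafted $sim$ would be replaced by a function trained on hyperlinked document pairs, while the ranking step itself stays the same.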
[1] Thorsten Joachims, et al. Learning a Distance Metric from Relative Comparisons. NIPS, 2003.
[2] Gregory N. Hullender, et al. Learning to rank using gradient descent. ICML, 2005.
[3] Thorsten Joachims, et al. Optimizing search engines using clickthrough data. KDD, 2002.
[4] Samy Bengio, et al. Links between perceptrons, MLPs and SVMs. ICML, 2004.
[5] Brian D. Davison. Topical locality in the Web. SIGIR '00, 2000.
[6] Stephen E. Robertson, et al. Okapi at TREC-3. TREC, 1994.
[7] Stephen E. Robertson, et al. 1996.