Learning-to-rank methods automatically generate ranking functions that order previously unseen resources according to their relevance to a given search query. The training data used to construct such a model consists of features describing a document-query pair together with relevance scores indicating how important the document is for the query. In general, these relevance scores are obtained by asking experts to manually assess search results or by exploiting user search behaviour such as click data. Human evaluation of ranking results yields explicit relevance scores, but it is expensive to obtain. Click data can be logged from user interaction with a search engine, but this feedback is noisy. In this paper, we explore a novel source of implicit feedback for web search: tagging data. Deriving relevance feedback from tagging data provides an additional source of implicit feedback, which helps improve the reliability of automatically generated relevance scores and therefore the quality of learning-to-rank models.
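
The abstract does not spell out how tag-based relevance scores would be computed. The following is a minimal sketch, assuming relevance is derived simply from how often a document's tags match the query terms; the function names, thresholds, and features are hypothetical illustrations, not the method used in the paper.

# Hypothetical sketch (not the paper's actual method): turning social-tagging
# data into graded relevance labels that, together with document-query
# features, form training instances for a learning-to-rank model.
# All names, thresholds, and features below are illustrative assumptions.
from collections import Counter

def tag_relevance(query_terms, doc_tags, thresholds=(1, 5, 20)):
    """Map the number of tags matching the query onto a graded
    relevance score in {0, 1, 2, 3}; higher means more relevant."""
    counts = Counter(tag.lower() for tag in doc_tags)
    matches = sum(counts[term.lower()] for term in query_terms)
    return sum(matches >= t for t in thresholds)

def training_instance(query_terms, doc):
    """Build one (feature vector, relevance label) pair for a
    document-query pair; the features here are deliberately simple."""
    features = [
        len(doc["tags"]),                                  # total tag count
        len(set(t.lower() for t in doc["tags"])),          # distinct tags
        sum(q.lower() in doc["title"].lower() for q in query_terms),
    ]
    label = tag_relevance(query_terms, doc["tags"])
    return features, label

doc = {"title": "A Python Tutorial",
       "tags": ["python", "tutorial", "python", "programming", "python"]}
print(training_instance(["python"], doc))   # ([5, 3, 1], 1)

Such (feature vector, label) pairs could then be fed to any standard learning-to-rank algorithm in place of labels obtained from expert judgments or click logs.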