The CLEF 2001 Interactive Track
The problem of finding documents written in a language that the searcher cannot read is perhaps the most challenging application of cross-language information retrieval technology. In interactive applications, that task involves at least two steps: (1) the machine locates promising documents in a collection that is larger than the searcher could scan, and (2) the searcher recognizes documents relevant to their intended use from among those nominated by the machine. The goal of the 2001 Cross-Language Evaluation Forum's experimental interactive track was to explore the ability of present technology to support interactive relevance assessment. This paper describes the shared experiment design used at all three participating sites, summarizes preliminary results from the evaluation, and concludes with observations on lessons learned that can inform the design of subsequent evaluation campaigns.