Interactive Information Retrieval: An Evaluation Perspective
This presentation addresses methodological issues in the evaluation of interactive information retrieval (IIR), in terms of what it entails to study users' use of and interaction with IR systems, as well as their satisfaction with the retrieved information. In particular, the presentation focuses on test design and looks into the toolbox of IIR test design with reference to data collection methods and test procedures. It calls for careful, well-planned studies to strengthen the knowledge base generated by IIR studies. The presentation further reflects on the need for an updated theoretical framework that describes, on the one hand, the various types of IIR and, on the other, how IIR is nowadays often carried out in a seamless, task-switching IT environment across various platforms, including via apps. Such an environment also calls for new methodologies to study IIR behaviour in the users' own habitat, in order to obtain a complete and realistic picture that enhances our understanding of IIR. The presentation also reflects on whether a re-thinking of the concept of an information need is necessary. One may ask whether it still makes sense to talk about types of information needs, or whether we should rather study IIR from the perspective of search dedication and task load in order to also include everyday life information seeking. With this presentation, the IIR community is invited to an exchange of ideas and encouraged to collaborate in solving these (and other) issues to our joint advantage.