Task dimensions of user evaluations of information retrieval systems

This paper reports on the evaluation of three search engines using a variety of user-centred evaluation measures grouped into four criteria of retrieval system performance. This exploratory study of users' evaluations of search engines took as its premise that indicators of system success for users derive from the retrieval task the system supports, given the system's objective of facilitating search. On this basis, user evaluation is defined as a multidimensional construct, providing a framework that links evaluations to system features in defined user contexts. Our findings indicate that users' evaluations vary across the engines, and the dimensional approach to evaluation suggests the possible impact of specific system features. Further analysis suggests that characteristics of the user and/or the query context moderate the strength of the evaluation. Developing this approach to user evaluation may contribute to a better understanding of how system features and context affect user evaluations of retrieval systems.