How do users evaluate individual documents? An analysis of dimensions of evaluation activities

Introduction. Evaluation plays an important role in users' information searching and retrieval processes. While previous research has focused mainly on the criteria applied, other dimensions of evaluation have received less attention. This study explores the dimensions of evaluation activities, including the criteria applied, the elements examined, the activities engaged in and the time spent evaluating individual documents.
Method. Thirty-one participants with varied demographic backgrounds took part in the study while conducting their own tasks. Multiple data collection methods were used: pre-questionnaires, interaction diaries, think-aloud protocols, transaction logs and post-questionnaires.
Analysis. Types of evaluation criteria, evaluation elements, evaluation activities and their associated pre- and post-activities were analysed through open coding. Descriptive analysis was applied to the documents on which the most and least time was spent, together with the associated factors.
Results. The findings reveal that evaluation activities are complicated and dynamic. Eighteen types of evaluation criteria, seven types of evaluation elements, and six types of evaluation activities with their associated pre- and post-activities were identified. In addition, factors affecting the time spent evaluating individual documents were analysed.
Conclusions. The authors offer suggestions for information retrieval system design to support effective evaluation.
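The time-spent dimension above is reported as being derived from transaction logs. As a minimal, hypothetical sketch (not the authors' actual analysis pipeline), per-document dwell time could be computed from timestamped open/close events roughly as follows; the log format, the participant and document identifiers, and the dwell_times helper are all illustrative assumptions.

from collections import defaultdict
from datetime import datetime

# Hypothetical log records: (timestamp, participant_id, document_id, event).
# Events are assumed to mark when a participant opens and leaves a document.
log = [
    ("2010-03-01 10:00:05", "P01", "doc_17", "open"),
    ("2010-03-01 10:03:42", "P01", "doc_17", "close"),
    ("2010-03-01 10:03:50", "P01", "doc_23", "open"),
    ("2010-03-01 10:04:02", "P01", "doc_23", "close"),
]

def dwell_times(records):
    """Return total seconds each (participant, document) pair was open."""
    opened = {}
    totals = defaultdict(float)
    for ts, participant, doc, event in records:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
        key = (participant, doc)
        if event == "open":
            opened[key] = t
        elif event == "close" and key in opened:
            totals[key] += (t - opened.pop(key)).total_seconds()
    return totals

if __name__ == "__main__":
    times = dwell_times(log)
    # Rank documents by time spent, e.g. to contrast the most and least examined ones.
    for (participant, doc), secs in sorted(times.items(), key=lambda kv: -kv[1]):
        print(f"{participant} spent {secs:.0f}s on {doc}")

Such a summary would only supply the descriptive time figures; the factors associated with long or short evaluation times would still come from the qualitative data (diaries and think-aloud protocols) described in the Method.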
