Assigned tasks are not the same as self-chosen Web search tasks

Short, assigned question-answering-style tasks are often used as probes to understand how users search. While such assigned tasks are easy to test and effective at eliciting the particulars of a given search capability, they are not the same as naturalistic searches. We studied the quantitative differences between assigned tasks and self-chosen "own" tasks, finding that users behave differently when doing their own tasks: they stay on task longer but make fewer queries, and different kinds of queries, overall. This finding implies that users' own tasks should be used when studying user behavior, in addition to assigned tasks, which remain useful for feature testing in lab settings.
