Differences in the Use of Search Assistance for Tasks of Varying Complexity

In this paper, we study how users interact with a search assistance tool while completing tasks of varying complexity. We designed a novel tool, referred to as the search guide (SG), that displays the search trails (queries issued, results clicked, pages bookmarked) of three previous users who completed the same task. We report on a laboratory study with 48 participants that investigates the factors that may influence user interaction with the SG and the effects of the SG on different outcome measures. Participants were asked to find and bookmark pages for four tasks of varying complexity, and the SG was made available to half of the participants. We collected log data and conducted retrospective stimulated recall interviews to learn about participants' use of the SG. Our results suggest the following trends. First, interaction with the SG was greater for more complex tasks. Second, the a priori determinability of the task (i.e., whether the task was perceived to be well-defined) helped predict whether participants gained a bookmark from the SG. Third, participants who interacted with the SG but did not gain a bookmark reported less system support than those who gained a bookmark and those who did not interact with the SG at all. Finally, a qualitative analysis of our interviews suggests differences in the motivations for and benefits of SG use across levels of task complexity. Our findings extend prior research on search assistance tools and provide insights for the design of systems that help users with complex search tasks.
