What Users Do: The Eyes Have It

Search engine result pages – the ten blue links – are a staple of document retrieval services. The usual presumption is that users read these one by one from the top, judging the usefulness of each document from the snippet presented, accessing the underlying document when a snippet seems attractive, and then moving on to the next snippet. In this paper we re-examine this assumption, and present the results of a user experiment in which gaze tracking is combined with click analysis. We conclude that, in very general terms, users do indeed read from the top, but that at a detailed level complex behaviors are evident, suggesting that a more sophisticated model of user interaction might be appropriate. In particular, we argue that users retain a number of snippets in an "active band" that shifts down the result page, and that reading and clicking activity tends to take place within that band in a manner that is not strictly sequential.
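To make the "active band" idea concrete, the toy simulation below sketches one possible reading of the model: the user holds a small window of consecutive snippets active, reads and clicks within that window in a non-sequential order, and then shifts the window down the page. The band size and click probability are purely illustrative assumptions, not parameters reported in the paper.

```python
import random

def simulate_active_band(num_results=10, band_size=3, p_click=0.3, seed=0):
    """Toy sketch of the 'active band' reading model described in the abstract.

    Assumption: the user keeps `band_size` consecutive snippets 'active',
    examines them in a shuffled (non-sequential) order, clicks each with
    probability `p_click`, then shifts the band further down the result page.
    """
    rng = random.Random(seed)
    events = []  # (action, rank) pairs; ranks are 1-based
    top = 0
    while top < num_results:
        band = list(range(top, min(top + band_size, num_results)))
        rng.shuffle(band)            # reading within the band is not strictly sequential
        for rank in band:
            events.append(("read", rank + 1))
            if rng.random() < p_click:
                events.append(("click", rank + 1))
        top += band_size             # the band shifts down the result page
    return events

if __name__ == "__main__":
    for action, rank in simulate_active_band():
        print(f"{action:>5} result {rank}")
```

Running the sketch produces a read/click trace that is top-to-bottom in broad terms but locally out of order, which is the qualitative pattern the abstract argues for.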
