Personalization of search results using interaction behaviors in search sessions

Personalization of search results offers the potential for significant improvement in information retrieval performance. User interactions with the system and with documents during information-seeking sessions provide a wealth of information about user preferences and task goals. In this paper, we propose methods for analyzing and modeling user search behavior in search sessions to predict document usefulness, and then using this information to personalize search results. We generate prediction models of document usefulness from behavior data collected in a controlled lab experiment with 32 participants, each completing uncontrolled searching for 4 tasks on the Web. The generated models are then tested against another data set of user search sessions with radically different search tasks and constraints. The documents predicted useful and not useful by the models are used to modify the queries in each search session using a standard relevance feedback technique. The results show that applying the models led to consistently improved performance over a baseline that did not take user interaction information into account. These findings have implications for designing systems for personalized search and for improving the user search experience.
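The query-modification step described above can be sketched with the classic Rocchio relevance feedback formula, treating predicted-useful documents as positive feedback and predicted-not-useful documents as negative feedback. This is a minimal illustration, not the paper's exact implementation; the weights `alpha`, `beta`, and `gamma` below are conventional textbook defaults, and the vector representation is an assumed bag-of-words term-weight vector.

```python
import numpy as np

def rocchio(query_vec, useful_docs, nonuseful_docs,
            alpha=1.0, beta=0.75, gamma=0.15):
    """Modify a query vector with Rocchio relevance feedback.

    query_vec      : term-weight vector for the original query
    useful_docs    : 2-D array, one row per predicted-useful document
    nonuseful_docs : 2-D array, one row per predicted-not-useful document

    Weights are standard defaults, not values from the paper.
    """
    q = alpha * np.asarray(query_vec, dtype=float)
    if len(useful_docs):
        # Move the query toward the centroid of predicted-useful documents.
        q += beta * np.mean(np.asarray(useful_docs, dtype=float), axis=0)
    if len(nonuseful_docs):
        # Move it away from the centroid of predicted-not-useful documents.
        q -= gamma * np.mean(np.asarray(nonuseful_docs, dtype=float), axis=0)
    # Negative term weights are usually dropped in practice.
    return np.clip(q, 0.0, None)
```

For example, with a query weighting only term 0, one useful document weighting term 1, and one not-useful document weighting term 2, the modified query boosts term 1 and suppresses term 2.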
