Crowdsourcing interactions: Capturing query sessions through crowdsourcing

The TREC evaluation paradigm, developed from the Cranfield experiments, typically considers the effectiveness of information retrieval (IR) systems when retrieving documents for an isolated query. A step towards a more robust evaluation of interactive information retrieval systems has been taken by the TREC Session Track, which aims to evaluate the retrieval performance of systems over query sessions. Its evaluation protocol relies on artificially generated reformulations of initial queries extracted from other TREC tasks, together with relevance judgements made by NIST assessors. This procedure is motivated mainly by the difficulty of accessing session logs and by the cost of conducting interactive experiments. In this paper we outline a protocol for acquiring user interactions with IR systems through crowdsourcing. We show how real query sessions can be captured inexpensively, without resorting to commercial query logs.