Web search functionality is increasingly integrated into operating systems, software applications, and other interactive environments that extend beyond the traditional web browser. In particular, intelligent virtual assistants (e.g., Microsoft Cortana or Apple Siri) often "fall back" to generic web search when utterances fall outside the set of scenarios known to the agent. In this paper we analyze a three-month log of web search queries posed via the Cortana virtual assistant. We report that, in this environment, users frequently ask questions that implicitly pertain to the system or device from which they are searching (e.g., asking: [how do I take a screenshot]). Unfortunately, accurately answering these implicit system queries poses significant challenges to general web search engines, due in part to the lack of available context. We show that such queries: (1) can be detected with high precision, (2) are common, and (3) can be automatically reformulated to substantially improve retrieval performance in these fall-through scenarios.
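The detect-then-reformulate pipeline described above can be illustrated with a minimal sketch. The pattern list, function names, and context string below are hypothetical stand-ins for illustration; the paper's actual classifier and reformulation model are not reproduced here.

```python
import re

# Hypothetical keyword patterns suggesting an implicit system query;
# a real detector would use a trained classifier rather than rules.
SYSTEM_PATTERNS = [
    r"\bhow (do i|to)\b.*\b(screenshot|volume|brightness|wifi|bluetooth)\b",
    r"\b(turn (on|off)|enable|disable)\b.*\b(wifi|bluetooth|airplane mode)\b",
]


def is_implicit_system_query(query: str) -> bool:
    """Return True if the query likely refers to the local device or OS."""
    q = query.lower()
    return any(re.search(p, q) for p in SYSTEM_PATTERNS)


def reformulate(query: str, device_context: str) -> str:
    """Append device context (e.g., the OS name) to ground a system query;
    leave non-system queries unchanged."""
    if is_implicit_system_query(query):
        return f"{query} {device_context}"
    return query
```

For example, `reformulate("how do I take a screenshot", "Windows 10")` would yield a query that makes the implicit device context explicit, while an ordinary query such as "weather in Seattle" passes through untouched.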