A contextual computing approach may prove a breakthrough in personalized search efficiency

Contextual computing refers to the enhancement of a user's interactions by understanding the user, the context, and the applications and information being used, typically across a wide set of user goals. Contextual computing is not just about modeling user preferences and behavior or embedding computation everywhere; it is about actively adapting the computational environment, for each and every user, at each point of computation.

With respect to personalized search, the contextual computing approach focuses on understanding the information consumption patterns of each user, the various information foraging strategies [3] and applications they employ, and the nature of the information itself. Focusing on the user enables a shift from what we call "consensus relevancy," where the relevancy computed for the entire population is presumed relevant for each user, toward personal relevancy, where relevancy is computed for each individual within the context of their interactions (a small sketch contrasting the two appears at the end of this section). The benefits of personalized search can be significant, appreciably decreasing the time it takes people, novices and experts alike, to find information.

Here, we review the evolution of the field of information retrieval (IR) [4], setting the stage for examining how search can be personalized, with particular emphasis on the Web. We then describe the Outride system and review a set of experiments.

The field of IR has evolved from analyzing the letters and words that make up the content of documents, to the integration of intrinsic document properties like citations and hyperlinks, to the incorporation of usage data. Content-based approaches such as statistical and natural language techniques provide results that contain a specific set of words or meaning, but cannot differentiate which documents in a collection are the ones really worth reading. This need gave rise to a set of methods we refer to as "author relevancy" techniques. By computing what the most respected authors deem important, citation and hyperlink approaches provide an implicit measure of importance; one well-known example of this idea is sketched below. However, these techniques can create an authoring bias, where the meaning and resources valued by a group of authors determine the results for the entire user population. Imagine for a moment if the Java programming language were called something different. A query for the term "java" on the Web would produce a different set of results, likely about coffee, which is probably closer to most users' expectations. Additionally, a ranking bias can occur when, for a given topic, the authoring community values a different set …
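
To make the distinction between consensus and personal relevancy concrete, the following sketch re-ranks a shared result list against an individual user's profile. It is a minimal illustration under assumed names and toy data (`consensus_score`, the term profiles, the blend weight), not a description of Outride's actual algorithms.

```python
from collections import Counter

def consensus_rank(results):
    """Consensus relevancy: one ranking, computed once for the whole population."""
    return sorted(results, key=lambda doc: doc["consensus_score"], reverse=True)

def personal_rank(results, user_profile, blend=0.5):
    """Personal relevancy: re-rank the same results against an individual's
    profile, here a bag of term weights mined from that user's history."""
    def personal_score(doc):
        terms = doc["terms"]
        # Average profile weight of the document's terms, blended with the
        # population-wide score.
        overlap = sum(user_profile.get(t, 0.0) for t in terms) / max(len(terms), 1)
        return blend * doc["consensus_score"] + (1 - blend) * overlap
    return sorted(results, key=personal_score, reverse=True)

# Invented example: a query for "java" by a programmer vs. a coffee enthusiast.
results = [
    {"url": "java.sun.com", "consensus_score": 0.9,
     "terms": ["java", "programming", "language"]},
    {"url": "coffee.example.com", "consensus_score": 0.7,
     "terms": ["java", "coffee", "roast"]},
]
programmer = Counter({"programming": 1.0, "language": 0.8})
barista = Counter({"coffee": 1.0, "roast": 0.6})

print([d["url"] for d in consensus_rank(results)])            # same for everyone
print([d["url"] for d in personal_rank(results, programmer)]) # java.sun.com first
print([d["url"] for d in personal_rank(results, barista)])    # coffee page first
```

Both users start from the identical consensus ranking; the blend parameter simply controls how far the personalized score may depart from it.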
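
As one well-known instance of an "author relevancy" technique, the sketch below runs a toy power-iteration PageRank over an invented link graph: a page accrues importance from the pages that link to it, weighted by those pages' own importance. The graph and parameter values are illustrative assumptions, not data from this article.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Baseline probability of jumping to a random page.
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:             # otherwise, split its rank among its outlinks
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(graph))  # "c" scores highest: the best-regarded pages cite it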