Applying One Reason Decision-making: The Prioritisation of Literature Searches

The prioritisation of literature searches aims to order the large number of articles returned by a simple search so that the ones most likely to be relevant are at the top of the list. Prioritisation relies on having a good model of human decision-making that can learn, from the articles users select as relevant, to predict which of the remaining articles will be relevant. We develop and evaluate two psychological decision-making models for prioritisation: a "rational" model that considers all of the available information, and a "one reason" model that uses limited information to make decisions. The models are evaluated in an experiment where users rate the relevance of every article returned by PsycINFO for a number of different research topics. The results show that both models achieve a level of prioritisation that significantly improves upon the default ordering of PsycINFO, and that the one reason model is superior to the rational model, especially when there are only a few relevant articles. The implications of the results for developing prioritisation systems in applied settings are discussed, together with implications for the general modeling of human decision-making.

When a researcher first does a literature search, they are usually only able to supply general search criteria, such as one or two keywords, to indicate their broad topic of interest. Typically, these initial searches return a large number of potentially relevant articles. Faced with this information overload, one option for the researcher is to refine their search and hope that a more manageable list of articles is returned. Often, however, this refinement is difficult, because the researcher is unsure exactly what sorts of materials are available, and needs to "sample" or "explore" the large initial list of articles before a more detailed search can be constructed with any confidence.

Prioritisation offers a different approach to dealing with the information overload. The basic idea is to present the articles one at a time, requiring the user to indicate whether or not each article is of interest. As each article is examined, prioritisation re-orders the remaining articles so that the relevant ones are placed at the top of the list. If prioritisation is effective, the problem of information overload is solved without the user ever having to construct a refined search: they only need to work from the top of the prioritised list until they reach the point where the articles are no longer of sufficient relevance to be worth pursuing.

While the prioritisation problem has been tackled in a variety of information retrieval contexts using machine learning techniques (e.g., Balabanovic, 1998; Macskassy, Dayanik, & Hirsh, 1999; Sahami, Dumais, Heckerman, & Horvitz, 1998), it has typically not been tackled from a cognitive modeling perspective. This is unfortunate, because prioritisation rests on the ability to predict whether or not a user will evaluate an article as relevant, and so requires an effective model of human decision-making to be successful. In this paper, we develop and evaluate two cognitive models for the prioritisation of literature searches: a "rational" model, which performs exhaustive calculations, and a "one reason" model, which requires only limited time by making assumptions about the nature of its environment.
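To make these ideas concrete, the following Python sketch shows one way the interactive prioritisation loop and the two styles of decision model could be realised. The keyword-set representation of articles, the smoothed log-odds scoring for the "rational" model, and the validity-ordered single-cue rule for the "one reason" model are illustrative assumptions only; the representations and models actually used are specified in the sections that follow.

```python
import math

def rational_score(article, relevant, irrelevant, vocabulary):
    """'Rational' style: combine evidence from every available cue,
    here as a Laplace-smoothed additive log-odds over all keywords."""
    score = 0.0
    for word in vocabulary:
        p_rel = (sum(word in a for a in relevant) + 1) / (len(relevant) + 2)
        p_irr = (sum(word in a for a in irrelevant) + 1) / (len(irrelevant) + 2)
        if word in article:
            score += math.log(p_rel / p_irr)
        else:
            score += math.log((1.0 - p_rel) / (1.0 - p_irr))
    return score

def one_reason_score(article, relevant, irrelevant, vocabulary):
    """'One reason' style: order cues by how well they have discriminated
    so far, and decide on the single best cue the article possesses."""
    def validity(word):
        hits = sum(word in a for a in relevant)
        misses = sum(word in a for a in irrelevant)
        return hits / (hits + misses) if hits + misses else 0.5
    for word in sorted(vocabulary, key=validity, reverse=True):
        if word in article:
            return validity(word)  # stop searching: one reason suffices
    return 0.0

def prioritise(remaining, judged, score, vocabulary):
    """Re-order the articles not yet examined, given all judgments so far."""
    relevant = [a for a, is_relevant in judged if is_relevant]
    irrelevant = [a for a, is_relevant in judged if not is_relevant]
    return sorted(remaining,
                  key=lambda a: score(a, relevant, irrelevant, vocabulary),
                  reverse=True)

# Example: articles as keyword sets; the user has judged two articles so far.
vocabulary = {"heuristics", "bayesian", "memory", "decision"}
judged = [({"heuristics", "decision"}, True), ({"memory"}, False)]
remaining = [{"memory", "bayesian"}, {"heuristics", "bayesian"}]
print(prioritise(remaining, judged, one_reason_score, vocabulary))
```

Under either scoring rule, the loop is the same: after every judgment, the remaining articles are re-sorted, so the models differ only in how much information they consult to score each candidate.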
In the next section, we describe how literature searches are represented by these models, and how information about them is learned. We then describe the two models in detail, before presenting the results of an experiment where both are evaluated on real-world data. Finally, we draw some conclusions regarding the theoretical implications of the results for understanding human decision-making, and the applied implications for building a literature search prioritisation system.

[1] Shlomo Zilberstein, et al. Models of Bounded Rationality, 1995.

[2] S. Pinker. How the Mind Works, 1999, Annals of the New York Academy of Sciences.

[3] J. Doyle. Rational Decision Making, 1998.

[4] A. Tversky. Features of Similarity, 1977.

[5] Michael D. Lee, et al. Using Cognitive Decision Models to Prioritize E-mails, 2002, Proceedings of the Twenty-Fourth Annual Conference of the Cognitive Science Society.

[6] L. Komatsu. Recent views of conceptual structure, 1992.

[7] R. Shepard. Representation of structure in similarity data: Problems and prospects, 1974.

[8] R. N. Shepard, et al. Multidimensional Scaling, Tree-Fitting, and Clustering, 1980, Science.

[9] Michael D. Lee. Generating Additive Clustering Models with Minimal Stochastic Complexity, 2002, Journal of Classification.

[10] E. Brunswik. Organismic achievement and environmental probability, 1943.

[11] Jonathan D. Cohen. Highlights: Language- and Domain-Independent Automatic Indexing Terms for Abstracting, 1995, Journal of the American Society for Information Science.

[12] R. A. Brooks. Intelligence without Representation, 1991, Artificial Intelligence.

[13] P. Hofstaetter. [Similarity], 1955, Psyche.

[14] H. Simon, et al. Rational choice and the structure of the environment, 1956, Psychological Review.

[15] D. S. Sivia. Data Analysis, 1996.

[16] Susan T. Dumais, et al. A Bayesian Approach to Filtering Junk E-Mail, 1998, AAAI.

[17] Haym Hirsh, et al. EmailValet: Learning User Preferences for Wireless Email, 1999.

[18] P. M. Todd, et al. Précis of Simple heuristics that make us smart, 2000, Behavioral and Brain Sciences.

[19] G. Gigerenzer, et al. Reasoning the fast and frugal way: models of bounded rationality, 1996, Psychological Review.

[20] Joshua B. Tenenbaum. Learning the Structure of Similarity, 1995, NIPS.

[21] David Lindley. Bayesian Statistics, a Review, 1987.

[22] M. Lee. Are these two groups of scores significantly different? A Bayesian approach, 2002.

[23] Michael D. Lee, et al. Neural Feature Abstraction from Judgments of Similarity, 1998, Neural Computation.

[24] I. J. Myung, et al. Toward a method of selecting among computational models of cognition, 2002, Psychological Review.

[25] Roger N. Shepard, et al. Additive clustering: Representation of similarities as combinations of discrete overlapping properties, 1979.

[26] Marko Balabanovic. Exploring Versus Exploiting when Learning User Models for Text Recommendation, 1998, User Modeling and User-Adapted Interaction.

[27] P. Todd, et al. Simple Heuristics That Make Us Smart, 1999.

[28] Douglas L. Medin, et al. Context theory of classification learning, 1978.

[29] H. Pashler. How persuasive is a good fit? A comment on theory testing, 2000, Psychological Review.

[30] L. Wasserman, et al. Computing Bayes Factors by Combining Simulation and Asymptotic Approximations, 1997.