Making Sense out of Food

Brent Kievit-Kylar (bkievitk@indiana.edu)
Cognitive Science Program, IU Bloomington, IN 47405 USA

Yong-Yeol Ahn (yyahn@indiana.edu)
School of Informatics and Computing, IU Bloomington, IN 47405 USA

Peter M. Todd (pmtodd@indiana.edu)
Cognitive Science Program, IU Bloomington, IN 47405 USA

Abstract

In this paper we explore the application of a novel data collection scheme for multi-sensory information to the question of whether different sensory domains tend to show similar relations between objects (along with some unique variance). Our analyses—hierarchical clustering, MDS mapping, and other comparisons between sensory domains—support the existence of common representational schemes for food items in the olfactory, taste, visual, and tactile domains. We further show that the similarity within different sensory domains is a predictor for Rosch (1975) typicality measures. We also use the relative importance of sensory domains to predict the overall similarity between pairs of words, and compare subjective similarities to objective similarities based on physical sensory properties of the foods, showing a reasonable match.

Keywords: Multi-sensory; data collection; typicality.

Introduction

While humans are primarily visual creatures (Barton 1995, 1998), we rely on all of our senses to function in the real world. If early humans had judged whether food had gone bad only from sight, without the use of smell, they would have had a lower survival rate. The use of multisensory information is so deeply ingrained in our representations of the world that it often surfaces in pre-conscious tasks such as priming (Pecher 1998). But how much distinctive information do the different sensory domains provide about objects? Are exceptional objects in one sensory domain unexceptional in others, or do the different senses tend to provide largely overlapping information about objects? Addressing these questions and understanding the structure of multimodal sensory representations may provide critical insights for building better semantic space models, understanding language acquisition, and modeling memory phenomena including priming. Here we take an initial step by introducing a crowdsourcing framework for collecting multi-sensory object information, along with ways of analyzing it.

In previous work, Kievit-Kylar and Jones (2011) showed that carefully collected visual information could be used as a successful predictor of people's judgments of overall similarity between objects, and that this predictor captured variance different from that supplied by semantic models based on text-corpus analysis (e.g., Dumais et al., 1997; Jones et al., 2006; Lund & Burgess, 1996) and featural information (e.g., so-called McRae features that people generate to describe objects; McRae 2005). Similarly, multi-modal information from objective measures of the visual, gustatory, and olfactory modalities, along with subjective semantic and featural representations, has been shown to have significant cross-modal predictive power (Kievit-Kylar & Jones 2012a,b): information about an object in one sensory modality can provide significant information about that object's representation in another modality. Combining information about an object across multiple modalities improves the prediction of the unknown modality further.

Unfortunately, collecting objective similarity measures based on physical features in various sensory domains is a difficult and expensive task, requiring specialized equipment for smell, taste, and touch information. Moreover, the resulting measures do not necessarily reflect the same sort of information available to and used by humans when they make their own similarity judgments (e.g., due to nonlinearities of the senses as well as potential mismatches between the features detectable by humans versus machines). Here we use a novel technique based on a fluency and grouping task to collect subjective similarity information across multiple sensory domains. These data are used to test the hypothesis that, overall, different sensory modalities tend to conserve the same similarity relations among a set of objects, coding overlapping information. At the same time, the unique variance contained in the details of those sensory modalities is critical to understanding the relationships among these objects.

To show this, we use cross-modal data we collected about different types of food. The category of food is useful for this exploration because foods are fundamental objects for humans, and people have rich multi-sensory conceptions of various foods in the visual, olfactory, gustatory, and tactile modalities (we did not include the auditory modality). We then compare the subjective representations obtained from people across sensory domains, as well as to existing objective data within domains (e.g., comparing how similar people judge the smell of two objects with how much their compositions of volatile chemicals overlap), to assess the extent of shared information across sensory domains for foods.
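The core comparison described above—testing whether different sensory modalities conserve the same similarity relations among a set of objects—can be sketched as correlating the pairwise similarities obtained in one modality with those obtained in another. The matrices, values, and function names below are illustrative assumptions for exposition, not the paper's actual data or analysis code.

```python
import numpy as np

# Toy pairwise similarity matrices for the same four foods in two
# modalities (e.g., smell and taste). Values are made up for illustration.
smell = np.array([
    [1.0, 0.8, 0.2, 0.1],
    [0.8, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.7],
    [0.1, 0.2, 0.7, 1.0],
])
taste = np.array([
    [1.0, 0.7, 0.3, 0.2],
    [0.7, 1.0, 0.2, 0.3],
    [0.3, 0.2, 1.0, 0.6],
    [0.2, 0.3, 0.6, 1.0],
])

def pairwise_vector(sim):
    """Flatten the strict upper triangle so each object pair appears once."""
    i, j = np.triu_indices_from(sim, k=1)
    return sim[i, j]

def modality_agreement(a, b):
    """Pearson correlation between two modalities' pairwise similarities."""
    return float(np.corrcoef(pairwise_vector(a), pairwise_vector(b))[0, 1])

print(modality_agreement(smell, taste))
```

A high correlation would indicate that the two modalities encode largely overlapping similarity structure; the residual (unshared) variance corresponds to the modality-specific information discussed above.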

[1] Henley, N. (1969). A psychological study of the semantics of animal terms.

[2] Gómez, S., et al. (2006). Solving non-uniqueness in agglomerative hierarchical clustering using multidendrograms. Journal of Classification.

[3] Rosch, E. (1975). Cognitive representations of semantic categories.

[4] Williams, M., et al. (1982). Psychological study.

[5] Kievit-Kylar, B., et al. (2011). The Semantic Pictionary Project. CogSci.

[6] Seidenberg, M. S., et al. (2005). Semantic feature production norms for a large set of living and nonliving things. Behavior Research Methods.

[7] Roach, M. (2013). Gulp: Adventures on the Alimentary Canal.

[8] Landauer, T., et al. (1997). A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge.

[9] Blechert, J., et al. (2012). FOOD.PICS: A picture database for the study of eating and appetite.

[10] Kintsch, W., et al. (2006). High-dimensional semantic space accounts of priming.

[11] Jones, M. N., et al. (2012). Visualizing multiple word similarity measures. Behavior Research Methods.

[12] Barton, R. (1998). Visual specialization and brain evolution in primates. Proceedings of the Royal Society of London, Series B: Biological Sciences.

[13] Harvey, P., et al. (1995). Evolutionary radiation of visual and olfactory brain systems in primates, bats and insectivores. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences.

[14] Barabási, A.-L., et al. (2011). Flavor network and the principles of food pairing. Scientific Reports.

[15] Burgess, C., et al. (1996). Producing high-dimensional semantic spaces from lexical co-occurrence.

[16] Raaijmakers, J., et al. (1998). Does pizza prime coin? Perceptual priming in lexical decision and pronunciation.