Creativity Evaluation through Latent Semantic Analysis

Eve A. Forster (eve.forster@utoronto.ca)
University of Toronto Scarborough, Department of Psychology
1265 Military Trail, Toronto, ON M1C 1A4 Canada

Kevin N. Dunbar (dunbar@utsc.utoronto.ca)
University of Toronto Scarborough, Department of Psychology
1265 Military Trail, Toronto, ON M1C 1A4 Canada

Abstract

The Uses of Objects Task is a widely used creativity test. The test is usually scored by humans, which introduces subjectivity and individual variance into creativity scores. Here, we present a new computational method for scoring creativity: Latent Semantic Analysis (LSA), a tool used to measure semantic distance between words. 33 participants provided creative uses for 20 separate objects. We compared human judges' scores with LSA scores and found that the LSA methods produced a better model of the underlying semantic originality of responses than traditional measures.

Keywords: latent semantic analysis; creativity; natural language processing

Creativity research has had a short but interesting history in the cognitive sciences. Beginning with Guilford's presidential address to the American Psychological Association in 1950, researchers have sought ways of discovering creative individuals (Guilford, 1947) that provide an alternative to the long and laborious methods used by the Gestalt psychologists. Gestalt research methods often consisted of extensive interviews with creative individuals (such as Albert Einstein; Wertheimer, 1945), which offered fascinating accounts of the creative moments of some creative people but were not amenable to discovering vast numbers of creative individuals. Guilford advocated the psychometric approach for this purpose, and over the subsequent decade a number of new creativity tests were devised. By the mid-1960s, the Guilford Alternate Uses test (Guilford, 1967) and the Torrance Test of Creative Thinking (Torrance, 1998) were widely used measures of creativity across the world.

On the surface, these tests were ideal; they were easy to administer and quick to score: the more responses made, the more creative the individual. Researchers soon discovered, however, that there was more to creativity than the number of responses. New measures were proposed that counted the number of categories employed and assessed response elaboration and novelty. These measures brought new problems to creativity assessment: they are inherently subjective, show large variance in coding, and take a considerable amount of time to score.

Sternberg and Lubart (1992) write that creativity is a function of six factors: intelligence, knowledge, thinking style, personality, motivation, and environmental context. Each of these can fluctuate from day to day due to changes in a person's internal and external environment, causing different subscales, such as drawing or writing, to fluctuate in different ways. The psychometric approach accommodates these multiple factors by administering a large battery of short tests to encapsulate all aspects of creativity. Most of the tests require people to generate or manipulate a large number of ideas. Guilford and Hoepfner (1966) provide 57 tasks that ask participants to do things such as grouping and regrouping objects according to common properties, listing the consequences of unlikely situations, and the Uses of Objects (UoO) Task.

While it is almost 60 years since Guilford's original address to the APA, the scoring of creativity tasks remains problematic. One way of addressing these problems would be to use an automated measurement tool that draws on underlying semantic knowledge to assess creativity. Although initially developed to model language learning, Latent Semantic Analysis (LSA) has since proven itself a flexible tool with a variety of sophisticated uses. In this article we test the hypothesis that it can be used as a consistent and completely automated creativity scoring method. Here, we use LSA to score the creativity of participants who perform the Uses of Objects task.

The Uses of Objects Task

The UoO Task is a psychometric test that requires people to generate multiple, original uses for a given object. Quantitative scores count the number of ideas (a measure of fluency) or the number of words per response (elaboration), while subjective scores judge creativity and category switching. The task is widely used (Dunbar, 2008; Guilford, 1967; Guilford & Hoepfner, 1966; Hudson, 1968; Torrance, 1998).

Scoring of the UoO task can be easily automated, but doing so strips the responses of their meaning. The only two scoring options at present are meaningful but subjective and slow, or consistent and fast but meaningless. The ideal scoring method should be meaningful, consistent, and completely automated; such a method may be devised by combining a traditional elaboration measure with a novel assessment of originality.

The need for consistent measurement

Popular scoring systems such as the Torrance Test of Creative Thinking (Torrance, 1998) require a trained person to assess productions, but this option is not always practical. Such assessment is slow, expensive, and subjective (and
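The quantitative UoO measures described above (fluency as the number of ideas, elaboration as words per response) are mechanical to compute, which is why they automate so easily. A minimal sketch, using invented example responses rather than actual study data:

```python
# Hypothetical responses from one participant for one object ("brick").
# Fluency = number of ideas; elaboration = words per response, as in
# the quantitative measures described in the text.
responses = [
    "use it as a paperweight",
    "build a wall",
    "grind it into pigment for paint",
]

fluency = len(responses)                           # number of ideas
elaboration = [len(r.split()) for r in responses]  # words per response
mean_elaboration = sum(elaboration) / fluency

print(fluency, elaboration, mean_elaboration)
```

As the text notes, scores like these are consistent and fast but blind to meaning: a participant who lists many mundane uses outscores one who gives a single striking one.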
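The semantic-distance idea behind LSA can be illustrated compactly. The sketch below is not the authors' pipeline: the corpus, prompt word, and dimensionality are invented for illustration, and real LSA spaces are trained on large text collections. It builds a term-document count matrix, reduces it with a truncated SVD, and measures cosine similarity between word vectors; a response whose words sit far from the prompt object in the latent space would count as more original.

```python
import numpy as np

# Tiny invented corpus; each "document" is one bag of context words.
docs = [
    "brick build house wall",
    "brick heavy clay block",
    "paperweight heavy desk paper",
    "house wall roof build",
]

# Term-document count matrix.
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}
X = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        X[index[w], j] += 1

# Truncated SVD: keep k latent dimensions (the "latent semantic" space).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vecs = U[:, :k] * s[:k]  # one row per vocabulary word

def similarity(w1, w2):
    """Cosine similarity between two words in the latent space."""
    a, b = word_vecs[index[w1]], word_vecs[index[w2]]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# High similarity to the prompt object suggests a common, unoriginal
# use; low similarity suggests a semantically distant, original one.
print(similarity("brick", "build"))
```

Cosine similarity is bounded in [-1, 1], so an originality score can be derived directly from it (for example, as 1 minus the similarity between a response and its prompt object).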

References

[1] Richard A. Harshman et al. Indexing by Latent Semantic Analysis. J. Am. Soc. Inf. Sci., 1990.

[2] Teresa M. Amabile et al. Assessing the Work Environment for Creativity. 1996.

[3] Carlo Strapparava et al. Learning to Laugh (Automatically): Computational Models for Humor Recognition. Comput. Intell., 2006.

[4] John D. Nisbet et al. Contrary Imaginations: A Psychological Study of the English Schoolboy. 1966.

[5] W. Kintsch. Metaphor Comprehension: A Computational Theory. Psychonomic Bulletin & Review, 2000.

[6] R. Helson et al. In Search of the Creative Personality. 1996.

[7] Keith A. Hutchison. Is Semantic Priming Due to Association Strength or Feature Overlap? A Microanalytic Review. Psychonomic Bulletin & Review, 2003.

[8] Françoise Bacher. Contrary Imaginations: A Psychological Study of the English Schoolboy, L. Hudson, 1966. 1967.

[9] R. Sternberg. Handbook of Creativity: Subject Index. 1998.

[10] T. Landauer et al. A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge. 1997.

[11] J. Guilford et al. Sixteen Divergent-Production Abilities at the Ninth-Grade Level. 1966.

[12] Hyunsoo Kim et al. Extracting Unrecognized Gene Relationships from the Biomedical Literature via Matrix Factorizations. BMC Bioinformatics, 2007.

[13] Roni Reiter-Palmon et al. Encyclopedia of Creativity. 2011.

[14] Patrick F. Reidy. An Introduction to Latent Semantic Analysis. 2009.

[15] Peter W. Foltz et al. The Intelligent Essay Assessor: Applications to Educational Technology. 1999.

[16] R. Sternberg et al. Creativity: Its Nature and Assessment. 1992.

[17] Mark S. Seidenberg et al. Semantic Feature Production Norms for a Large Set of Living and Nonliving Things. Behavior Research Methods, 2005.

[18] Peter W. Foltz et al. An Introduction to Latent Semantic Analysis. 1998.

[19] A. Glenberg et al. Symbol Grounding and Meaning: A Comparison of High-Dimensional and Embodied Theories of Meaning. 2000.

[20] Bob Rehder et al. Using Latent Semantic Analysis to Assess Knowledge: Some Technical Considerations. 1998.

[21] R. McCrae. Creativity, Divergent Thinking, and Openness to Experience. 1987.

[22] E. Fluder et al. Latent Semantic Structure Indexing (LaSSI) for Defining Chemical Similarity. Journal of Medicinal Chemistry, 2001.

[23] J. Guilford. The Nature of Human Intelligence. 1968.

[24] J. Plucker et al. Handbook of Creativity: Psychometric Approaches to the Study of Human Creativity. 1998.

[25] J. Guilford. The Discovery of Aptitude and Achievement Variables. Science, 1947.

[26] Darrell Laham. Latent Semantic Analysis Approaches to Categorization. 1997.