Most words in natural language have more than one possible meaning. This seemingly simple observation leads to tremendous challenges in theoretical and computational linguistics, as is clearly shown in a volume of ten newly commissioned articles entitled Polysemy: Theoretical and Computational Approaches, edited by Yael Ravin and Claudia Leacock. Words may be thought of as occupying a spectrum of meaning, with synonyms at one end and homonyms at the other. Synonyms are different word forms that refer to the same concept, as when bank and shore refer to the side of a river. A homonym is a single word form that refers to multiple distinct concepts. For example, bank is a homonym when it refers to a financial institution or to the side of a river; these are completely distinct concepts that happen to be represented by the same string of characters. While there are clear-cut synonyms and homonyms in natural language, most words lie somewhere in between and are said to be polysemous. For example, bank is polysemous when it refers to a financial institution or a blood bank, since these are related, but not identical, meanings.

The scope of work represented in this volume is impressive. Chapters 2–4 and 6 focus on linguistically oriented studies of lexical semantics, Chapters 5, 7, and 8 offer critiques of current dictionaries and lexicography, and Chapters 9–11 present computational approaches to representing word meanings. Given this wide variety, the editors have very wisely provided an extensive introduction in Chapter 1. It is invaluable to any reader who is not familiar with lexical semantics, lexicography, or computational modeling (and it will be an unusual reader who is expert in all three).

The difficulty of making precise distinctions among word meanings is taken up in Chapter 2, ‘Aspects of the Micro-structure of Word Meanings’, by D. Alan Cruse. Cruse argues that words cannot be defined independently of their context, and that the set of context-invariant semantic properties of a word is too small to serve as an adequate foundation for specifying its meaning. Rather than relying on discrete cutoff points to characterize the semantics of a word, he advocates a continuum or gradient scale that relates word meanings more flexibly. This article sets the stage for many that follow, since the difficulty of drawing precise sense distinctions is what makes lexicography a challenging enterprise and motivates much of the work on computational modeling of word meanings from evidence found in large corpora.