Given a linguistic theory with constraints, parameters, or rules of any generality, learners are faced with what Dresher (1999) calls the credit problem. In terms of Optimality Theory (OT: Prince and Smolensky 1993/2004), there is usually more than one constraint that favors an optimal form over any one of its competitors and that the learner could rank above the constraints preferring that competitor. One of the most attractive properties of the constraint demotion algorithm (CDA: Tesar and Smolensky 1998) is that it finds a ranking that correctly handles the learning data (if one exists), without directly stipulating a solution to the credit problem. The Gradual Learning Algorithm (GLA: Boersma 1997, Boersma and Hayes 2001) does not share this property: it fails to converge when presented with a set of data that has multiple interacting instantiations of the credit problem. The advantage of the GLA over the CDA is that it handles variation; in the second part of the paper, I sketch an approach to the learning of variation that makes use of the inconsistency detection properties of the CDA (Tesar 1998).

A piece of learning data in Optimality Theory consists of an input and the optimal output, or Winner, paired with another output candidate, or Loser (a mark-data pair, or M-D pair). Constraints are annotated for whether they prefer the Winner or the Loser (this is an alternative to showing the violation marks incurred by each candidate; see esp. Prince 2003). Using this notation, a simple instance of the credit problem would be as in (1), where two constraints (Con1 and Con3) favor the Winner, and Con2 favors the Loser.

(1)  Winner ~ Loser    Con1: W    Con2: L    Con3: W
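To make the mechanics concrete, the following is a minimal sketch (in Python; not from the paper) of one standard presentation of constraint demotion, Recursive Constraint Demotion, applied to mark-data pairs annotated with W/L/e as described above. The dictionary encoding, the function name rcd, and the two toy data sets are illustrative assumptions; only the annotation scheme and the behavior on (1) follow from the text.

```python
# Minimal sketch of Recursive Constraint Demotion (RCD) over mark-data pairs.
# Each pair records, per constraint, whether it prefers the Winner ("W"),
# the Loser ("L"), or neither ("e").

def rcd(constraints, pairs):
    """Return a stratified ranking (a list of strata, highest first),
    or None if the mark-data pairs are inconsistent."""
    remaining = list(pairs)
    unranked = list(constraints)
    strata = []
    while unranked:
        # A constraint can be placed in the current stratum only if it
        # prefers no Loser in any remaining pair.
        stratum = [c for c in unranked
                   if all(p[c] != "L" for p in remaining)]
        if not stratum:
            return None  # no consistent ranking: inconsistency detected
        strata.append(stratum)
        unranked = [c for c in unranked if c not in stratum]
        # A pair is accounted for once some placed constraint prefers its Winner.
        remaining = [p for p in remaining
                     if not any(p[c] == "W" for c in stratum)]
    return strata

# The credit-problem instance in (1): Con1 and Con3 prefer the Winner,
# Con2 prefers the Loser. Either Con1 >> Con2 or Con3 >> Con2 would suffice.
print(rcd(["Con1", "Con2", "Con3"],
          [{"Con1": "W", "Con2": "L", "Con3": "W"}]))
# -> [['Con1', 'Con3'], ['Con2']]

# Two M-D pairs that jointly require Con1 >> Con2 and Con2 >> Con1:
# no ranking handles both, and RCD reports the inconsistency.
print(rcd(["Con1", "Con2"],
          [{"Con1": "W", "Con2": "L"},
           {"Con1": "L", "Con2": "W"}]))
# -> None
```

On (1), constraint demotion places both Winner-preferring constraints above Con2 in a single step, so it never has to decide which of Con1 or Con3 deserves the credit; and when no consistent ranking exists, the same procedure fails in a detectable way, which is the inconsistency detection that the second part of the paper puts to use.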
[1] McCarthy, J. (2005). Taking a Free Ride in Morphophonemic Learning.
[2] Prince, A., & Smolensky, P. (2004). Optimality Theory: Constraint Interaction in Generative Grammar.
[3] Ota, M., et al. (2004). The learnability of the stratified phonological lexicon.
[4] Goldwater, S., & Johnson, M. (2003). Learning OT constraint rankings using a maximum entropy model.
[5] Boersma, P. (1997). How we learn variation, optionality and probability.
[6] Pater, J. (2004). Exceptions in Optimality Theory: Typology and Learnability.
[7] Hammond, M., et al. (2004). Gradience, Phonotactics and the Lexicon in English Phonology.
[8] Pater, J. (2007). The Locus of Exceptionality: Morpheme-Specific Phonology as Constraint Indexation.
[9] Anttila, A. (1997). Deriving Variation from Grammar.
[10] Tesar, B., et al. (2003). Surgery in Language Learning.
[11] Dresher, B. E. (1999). Charting the Learning Path: Cues to Parameter Setting. Linguistic Inquiry.
[12] Boersma, P., & Hayes, B. (2001). Empirical Tests of the Gradual Learning Algorithm. Linguistic Inquiry.