This paper presents an overview of the speech corpus of Japanese learner English compiled by the National Institute of Information and Communications Technology (NICT), describing its data collection procedure and annotation schemes, including error tagging. We collected 1,200 interviews over three years. One of the most distinctive features of this corpus is that it contains rich information on learners' errors. We performed error tagging of learners' grammatical and lexical errors with an originally designed error tagset. We also evaluated the corpus through an experiment on automatic detection of learners' errors using the error tag information in the corpus, employing a Maximum Entropy (ME) model. Since only a limited amount of error-tagged data was available, we needed to enlarge the training data; we added correct sentences and artificially generated errors to the training data and found that this improved accuracy. We are planning to make this corpus publicly available in the spring of 2004 so that teachers and researchers in many fields can use the data for their own interests, such as second language acquisition research, syllabus and material design, or the development of computerized pedagogical tools, by combining it with NLP technology.
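As a rough illustration of the error-detection setup summarized above, the detection task can be framed as token-level classification with a Maximum Entropy model (equivalent to multinomial logistic regression), where the training set is enlarged with correct sentences and artificially corrupted copies of them. The following Python sketch with scikit-learn is only a minimal illustration under assumed features, a toy "ART_ERR" label, and a toy article-omission rule; it is not the paper's actual feature set, tagset, or augmentation procedure.

    # Hypothetical sketch: ME-based token-level error detection with
    # training-data enlargement (correct sentences + artificial errors).
    # Features, labels, and the augmentation rule are illustrative assumptions.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression

    def token_features(tokens, i):
        """Simple contextual features for the token at position i."""
        return {
            "word": tokens[i].lower(),
            "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
            "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
        }

    def make_artificial_errors(tokens):
        """Toy augmentation: drop each article and mark the following token
        as an article-omission error site."""
        corrupted, labels, mark_next = [], [], False
        for tok in tokens:
            if tok.lower() in {"a", "an", "the"}:
                mark_next = True          # simulate a learner omitting the article
                continue
            corrupted.append(tok)
            labels.append("ART_ERR" if mark_next else "OK")
            mark_next = False
        return corrupted, labels

    # Error-tagged learner sentences (per-token labels mark error sites).
    error_tagged = [
        (["I", "went", "to", "shop", "yesterday"], ["OK", "OK", "OK", "ART_ERR", "OK"]),
    ]
    # Correct sentences, used both as-is and as sources of artificial errors.
    correct = [
        ["She", "bought", "a", "new", "bag"],
        ["He", "goes", "to", "the", "office", "by", "bus"],
    ]

    train = list(error_tagged)
    for tokens in correct:
        train.append((tokens, ["OK"] * len(tokens)))   # correct sentence as-is
        train.append(make_artificial_errors(tokens))   # artificially corrupted copy

    X, y = [], []
    for tokens, labels in train:
        for i, label in enumerate(labels):
            X.append(token_features(tokens, i))
            y.append(label)

    vec = DictVectorizer()
    clf = LogisticRegression(max_iter=1000)  # ME model = multinomial logistic regression
    clf.fit(vec.fit_transform(X), y)

    test = ["I", "go", "to", "library", "every", "day"]
    feats = vec.transform([token_features(test, i) for i in range(len(test))])
    print(list(zip(test, clf.predict(feats))))

In this sketch, enlarging the training data simply means appending both the untouched correct sentences and their corrupted counterparts, which mirrors the idea of adding correct sentences and artificially made errors, though the paper's actual procedure may differ.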