Mark-up Barking Up the Wrong Tree
The interest in machine-learning methods for solving natural-language-understanding problems has made textual annotation an important auxiliary technique. Grammar induction based on annotation has been very successful for the Penn Treebank, where a corpus of English text was annotated with syntactic information. This shining example has inspired a plethora of annotation efforts: corpora are annotated for 'coreference', for animacy, for expressions of opinions, for temporal dependencies, for the estimated duration of the activities that the expressions refer to, and so on. It is not clear, though, that these efforts will repeat the success of the Penn Treebank. The circumstances in which the Penn Treebank project was executed are vastly different from those in which most annotation tasks take place.

First, the annotation was a linguistic task, and one about which there is reasonable agreement. People might quibble about the way to represent certain constituent-structure distinctions in English, but they do not, in general, disagree about the distinctions themselves; and if you don't like the Treebank as is, you can translate it into your favorite format. Second, the work was done by advanced students who understood the task and were supervised by specialists in the field. Third, this was not done in a hurry: the project started in 1989, and the corpora are still maintained and the annotations improved. About the only thing that this project has in common with the bulk of annotation tasks is that the annotators were not very well paid.

Currently, we see annotation schemas being developed for phenomena that are much less well understood than constituent structure. At workshops and conferences we hear lively discussions about interannotator agreement, about tools to make the annotation task easier, about how to cope with multiple annotations of the same text, about the development of international standards for annotation schemes in specific subdomains, and, most importantly, about the statistical models that can be built once the annotations are in place.

One thing that is much less discussed is whether the annotation indeed helps isolate the property that motivated it in the first place. This is not the same as interannotator agreement. For interannotator agreement, it suffices that all annotators do the same thing. But even with full annotator agreement it is not certain that the task captures what was initially intended. Assume that I want to mark all the expressions in a text that refer to the same entity with the same number, and I tell my annotators "Whenever you see the word Chicago, give it the same number": I'll get great interannotator agreement with that guideline, but it is debatable whether I will realize my proclaimed aim of classifying references to one and the same entity in the outside world. Presumably, I would like to catch all the references to the city of Chicago, but Chicago pizza is made and sold all over the United States, and the relation