All of BBN's research under the TIPSTER III program has focused on performing extraction by applying statistical models trained on annotated data, rather than by using programs that execute hand-written rules. Within the context of MUC-7, the SIFT system for extraction of template entities (TE) and template relations (TR) used a novel, integrated syntactic/semantic language model to extract sentence-level information, and then synthesized information across sentences, in part using a trained model for cross-sentence relations. At the named entity (NE) level as well, in both MET-1 and MUC-7, BBN employed a trained, HMM-based model.

The results in these TIPSTER evaluations are evidence that such trained systems, even at their current level of development, can perform roughly on a par with systems based on rules hand-tailored by experts. In addition, trained systems have some significant advantages:

• They can be easily ported to new domains simply by annotating fresh data.
• The complex interactions that make rule-based systems difficult to develop and maintain can instead be learned automatically from the training data.

We believe that improved and extended versions of such trained models have the potential for significant further progress toward practical systems for information extraction.
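To make the trained NE approach concrete, the following is a minimal sketch of Viterbi decoding over a bigram HMM name tagger. All states, vocabulary, and probabilities here are toy values invented for illustration; the actual BBN model conditions on much richer word features and is trained from annotated data rather than hand-set tables.

```python
import math

# Hypothetical state set and toy probabilities (not the real model's).
STATES = ["PERSON", "ORG", "OTHER"]

# P(state_i | state_{i-1}), with "START" as the initial context.
TRANS = {
    "START":  {"PERSON": 0.3, "ORG": 0.2, "OTHER": 0.5},
    "PERSON": {"PERSON": 0.4, "ORG": 0.1, "OTHER": 0.5},
    "ORG":    {"PERSON": 0.1, "ORG": 0.4, "OTHER": 0.5},
    "OTHER":  {"PERSON": 0.2, "ORG": 0.2, "OTHER": 0.6},
}

# P(word | state); unseen words receive a small smoothing mass.
EMIT = {
    "PERSON": {"john": 0.4, "smith": 0.5},
    "ORG":    {"bbn": 0.6, "acme": 0.3},
    "OTHER":  {"works": 0.3, "at": 0.3, "the": 0.3},
}
UNK = 1e-4


def viterbi(words):
    """Return the most probable state sequence for `words`."""
    # best[s] = (log-prob of best path ending in state s, that path)
    best = {
        s: (math.log(TRANS["START"][s])
            + math.log(EMIT[s].get(words[0], UNK)), [s])
        for s in STATES
    }
    for w in words[1:]:
        nxt = {}
        for s in STATES:
            emit = math.log(EMIT[s].get(w, UNK))
            # Choose the best predecessor state for s.
            score, path = max(
                ((best[p][0] + math.log(TRANS[p][s]), best[p][1])
                 for p in STATES),
                key=lambda t: t[0])
            nxt[s] = (score + emit, path + [s])
        best = nxt
    return max(best.values(), key=lambda t: t[0])[1]


print(viterbi(["john", "smith", "works", "at", "bbn"]))
# → ['PERSON', 'PERSON', 'OTHER', 'OTHER', 'ORG']
```

Training such a model amounts to estimating the transition and emission tables from annotated text by smoothed counting, which is why porting to a new domain reduces to annotating fresh data.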