A compositional context sensitive multi-document summarizer: exploring the factors that influence summarization

The usual approach for automatic summarization is sentence extraction, where key sentences from the input documents are selected based on a suite of features. While word frequency is often used as a feature in summarization, its impact on system performance has not been isolated. In this paper, we study the contribution to summarization of three factors related to frequency: content word frequency, composition functions for estimating sentence importance from word frequency, and adjustment of frequency weights based on context. We carry out our analysis using datasets from the Document Understanding Conferences, studying not only the impact of these factors on automatic summarizers, but also their role in human summarization. Our research shows that a frequency-based summarizer can achieve performance comparable to that of state-of-the-art systems, but only with a good composition function; context sensitivity improves performance and significantly reduces repetition.
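
To make the three factors concrete, here is a minimal sketch of a frequency-based extractive summarizer in Python. The tokenizer, the small stopword list, the averaging composition function, and the squaring update rule are illustrative assumptions for this sketch, not necessarily the paper's exact choices.

```python
from collections import Counter

# Tiny illustrative stopword list (an assumption, not the paper's list).
STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "is", "for", "that", "with"}

def content_words(sentence):
    """Lowercased alphabetic tokens minus stopwords (illustrative tokenization)."""
    return [w for w in sentence.lower().split() if w.isalpha() and w not in STOPWORDS]

def summarize(sentences, max_sentences=3):
    """Greedy frequency-based extraction with a context-sensitive weight update."""
    # Factor 1: estimate content-word probabilities from the input documents.
    counts = Counter(w for s in sentences for w in content_words(s))
    total = sum(counts.values()) or 1
    prob = {w: c / total for w, c in counts.items()}

    summary = []
    candidates = list(sentences)
    while candidates and len(summary) < max_sentences:
        # Factor 2: a composition function turns word weights into a sentence
        # score -- here the average word probability (sum, product, or max are
        # other possible choices).
        def score(s):
            words = content_words(s)
            return sum(prob.get(w, 0.0) for w in words) / len(words) if words else 0.0

        best = max(candidates, key=score)
        summary.append(best)
        candidates.remove(best)

        # Factor 3: context adjustment -- down-weight words already covered by
        # the summary (squaring the probability is one possible rule), which
        # discourages selecting repetitive sentences.
        for w in content_words(best):
            prob[w] = prob.get(w, 0.0) ** 2

    return summary
```

Under these assumptions, calling `summarize(list_of_sentences)` returns the selected sentences in the order they were chosen; the down-weighting step is what gives the sketch its context sensitivity.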
