Novel Word Embedding and Translation-based Language Modeling for Extractive Speech Summarization

Word embedding methods learn continuous distributed vector representations of words with neural networks. These representations capture semantic and/or syntactic cues and can in turn be used to induce similarity measures among words, sentences, and documents in context. Existing methods can be broadly categorized as prediction-based or count-based according to their training objectives and model architectures. Their pros and cons have been extensively analyzed and evaluated in recent studies, but relatively little work has continued this line of research to develop an enhanced learning method that brings together the advantages of the two model families. In addition, the interpretation of the learned word representations remains somewhat opaque. Motivated by these observations, this paper presents a novel method for learning word representations that not only inherits the advantages of classic word embedding methods but also offers a clearer and more rigorous interpretation of the learned representations. Building on the proposed word embedding method, we further formulate a translation-based language modeling framework for the extractive speech summarization task. A series of empirical evaluations demonstrates the effectiveness of the proposed word representation learning and language modeling techniques in extractive speech summarization.
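To make the abstract's premise concrete — that embeddings induce similarity measures usable for extractive summarization — the following is a minimal sketch. It is not the paper's proposed method: the toy embeddings and the centroid-similarity ranking heuristic are invented for illustration only; real systems learn high-dimensional vectors from large corpora.

```python
import math

# Hypothetical toy word embeddings (3-d for illustration; real models
# learn hundreds of dimensions from large corpora).
EMB = {
    "stock":  [0.9, 0.1, 0.0],
    "market": [0.8, 0.2, 0.1],
    "fell":   [0.7, 0.0, 0.2],
    "today":  [0.1, 0.3, 0.6],
    "cat":    [0.0, 0.9, 0.1],
    "sat":    [0.1, 0.8, 0.0],
}

def sent_vec(words):
    """Represent a sentence as the average of its word embeddings."""
    vecs = [EMB[w] for w in words if w in EMB]
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_sentences(doc):
    """Score each sentence by similarity to the document centroid
    (a simple extractive-summarization baseline, not the paper's model)."""
    doc_words = [w for sent in doc for w in sent]
    centroid = sent_vec(doc_words)
    scores = [(cosine(sent_vec(s), centroid), i) for i, s in enumerate(doc)]
    return sorted(scores, reverse=True)

doc = [["stock", "market", "fell"], ["cat", "sat"], ["market", "today"]]
ranking = rank_sentences(doc)  # highest-scoring sentence first
```

Sentences whose averaged embedding lies closest to the document centroid are ranked as most representative; a summary is then extracted by taking the top-ranked sentences up to a length budget.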
