YNU-HPCC at SemEval-2020 Task 10: Using a Multi-granularity Ordinal Classification of the BiLSTM Model for Emphasis Selection
[1] Hinrich Schütze, et al. Active Learning with Amazon Mechanical Turk, 2011, EMNLP.
[2] Xing Wang, et al. Multi-Granularity Self-Attention for Neural Machine Translation, 2019, EMNLP.
[3] Danushka Bollegala, et al. An Empirical Study on Fine-Grained Named Entity Recognition, 2018, COLING.
[4] Andrew McCallum, et al. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data, 2001, ICML.
[5] Sebastian Ruder, et al. An overview of gradient descent optimization algorithms, 2016, Vestnik komp'iuternykh i informatsionnykh tekhnologii.
[6] Bhuvana Ramabhadran, et al. Modeling phrasing and prominence using deep recurrent learning, 2015, INTERSPEECH.
[7] Wei Xu, et al. Bidirectional LSTM-CRF Models for Sequence Tagging, 2015, ArXiv.
[8] Xiu-Shen Wei, et al. Deep Learning for Fine-Grained Image Analysis: A Survey, 2019, ArXiv.
[9] Simon King, et al. Modelling prominence and emphasis improves unit-selection synthesis, 2007, INTERSPEECH.
[10] Seung Woo Lee, et al. Birdsnap: Large-Scale Fine-Grained Visual Categorization of Birds, 2014, CVPR.
[11] David Konopnicki, et al. Word Emphasis Prediction for Expressive Text to Speech, 2018, INTERSPEECH.
[12] Jun Zhao, et al. How to Generate a Good Word Embedding, 2015, IEEE Intelligent Systems.
[13] Maxine Eskénazi, et al. Multi-Granularity Representations of Dialog, 2019, EMNLP.
[14] Maria Wolters, et al. Prediction of word prominence, 1997, EUROSPEECH.
[15] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[16] Franck Dernoncourt, et al. Learning Emphasis Selection for Written Text in Visual Media from Crowd-Sourced Label Distributions, 2019, ACL.
[17] Franck Dernoncourt, et al. SemEval-2020 Task 10: Emphasis Selection for Written Text in Visual Media, 2020, SemEval.