High Quality ELMo Embeddings for Seven Less-Resourced Languages