On the Sentence Embeddings from BERT for Semantic Textual Similarity
Yiming Yang | Hao Zhou | Bohan Li | Mingxuan Wang | Lei Li | Junxian He
[1] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[2] Ruslan Salakhutdinov, et al. Breaking the Softmax Bottleneck: A High-Rank RNN Language Model, 2017, ICLR.
[3] Eneko Agirre, et al. SemEval-2016 Task 1: Semantic Textual Similarity, Monolingual and Cross-Lingual Evaluation, 2016, *SEMEVAL.
[4] Fabio Viola, et al. Taming VAEs, 2018, arXiv.
[5] Samuel R. Bowman, et al. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference, 2017, NAACL.
[6] Yoshua Bengio, et al. NICE: Non-linear Independent Components Estimation, 2014, ICLR.
[7] Bernhard Schölkopf, et al. From Variational to Deterministic Autoencoders, 2019, ICLR.
[8] Eneko Agirre, et al. SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation, 2017, *SEMEVAL.
[9] Douwe Kiela, et al. SentEval: An Evaluation Toolkit for Universal Sentence Representations, 2018, LREC.
[10] Pramod Viswanath, et al. All-but-the-Top: Simple and Effective Postprocessing for Word Representations, 2017, ICLR.
[11] Graeme Hirst, et al. Towards Understanding Linear Word Analogies, 2018, ACL.
[12] Eneko Agirre, et al. *SEM 2013 shared task: Semantic Textual Similarity, 2013, *SEMEVAL.
[13] Nan Hua, et al. Universal Sentence Encoder, 2018, arXiv.
[14] Yiming Yang, et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding, 2019, NeurIPS.
[15] Yiming Yang, et al. A Surprisingly Effective Fix for Deep Latent Variable Modeling of Text, 2019, EMNLP.
[16] Jing Huang, et al. Improving Neural Language Generation with Spectrum Control, 2020, ICLR.
[17] Claire Cardie, et al. SemEval-2015 Task 2: Semantic Textual Similarity, English, Spanish and Pilot on Interpretability, 2015, *SEMEVAL.
[18] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, arXiv.
[19] Ivan Kobyzev, et al. Normalizing Flows: Introduction and Ideas, 2019, arXiv.
[20] Kawin Ethayarajh, et al. How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings, 2019, EMNLP.
[21] Omer Levy, et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, 2018, BlackboxNLP@EMNLP.
[22] Sanjeev Arora, et al. A Simple but Tough-to-Beat Baseline for Sentence Embeddings, 2017, ICLR.
[23] Jeffrey Pennington, et al. GloVe: Global Vectors for Word Representation, 2014, EMNLP.
[24] Ivan Kobyzev, et al. Normalizing Flows: An Introduction and Review of Current Methods, 2020, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[25] Eneko Agirre, et al. SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity, 2012, *SEMEVAL.
[26] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[27] Christopher Potts, et al. A large annotated corpus for learning natural language inference, 2015, EMNLP.
[28] Omer Levy, et al. Neural Word Embedding as Implicit Matrix Factorization, 2014, NIPS.
[29] Holger Schwenk, et al. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data, 2017, EMNLP.
[30] Di He, et al. Representation Degeneration Problem in Training Natural Language Generation Models, 2019, ICLR.
[31] Marco Marelli, et al. A SICK cure for the evaluation of compositional distributional semantic models, 2014, LREC.
[32] Prafulla Dhariwal, et al. Glow: Generative Flow with Invertible 1x1 Convolutions, 2018, NeurIPS.
[33] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[34] Claire Cardie, et al. SemEval-2014 Task 10: Multilingual Semantic Textual Similarity, 2014, *SEMEVAL.
[35] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.