Neel Kant | Bryan Catanzaro | Raul Puri | Nikolai Yakovenko
[1] R. Plutchik. Emotions: A General Psychoevolutionary Theory, 1984.
[2] P. Ekman. An Argument for Basic Emotions, 1992.
[3] Ilya Sutskever, et al. Subword Language Modeling with Neural Networks, 2011.
[4] Christopher Potts, et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, 2013, EMNLP.
[5] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[6] Sanja Fidler, et al. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books, 2015, ICCV.
[7] Quoc V. Le, et al. Semi-supervised Sequence Learning, 2015, NIPS.
[8] Anton van den Hengel, et al. Image-Based Recommendations on Styles and Substitutes, 2015, SIGIR.
[9] Rico Sennrich, et al. Neural Machine Translation of Rare Words with Subword Units, 2015, ACL.
[10] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[11] Steve Renals, et al. Multiplicative LSTM for Sequence Modelling, 2016, ICLR.
[12] Ilya Sutskever, et al. Learning to Generate Reviews and Discovering Sentiment, 2017, arXiv.
[13] Diederik P. Kingma, et al. GPU Kernels for Block-Sparse Weights, 2017.
[14] Hardik Meisheri, et al. TCS Research at SemEval-2018 Task 1: Learning Robust Representations using Multi-Attention Architecture, 2018, SemEval.
[15] Shrikanth Narayanan, et al. NTUA-SLP at SemEval-2018 Task 1: Predicting Affective Content in Tweets with Deep Attentive RNNs and Transfer Learning, 2018, SemEval.
[16] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[17] Saif Mohammad, et al. SemEval-2018 Task 1: Affect in Tweets, 2018, SemEval.
[18] Omer Levy, et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, 2018, BlackboxNLP@EMNLP.
[19] Anima Anandkumar, et al. Learning From Noisy Singly-labeled Data, 2017, ICLR.
[20] Sebastian Ruder, et al. Fine-tuned Language Models for Text Classification, 2018, arXiv.
[21] Bryan Catanzaro, et al. Large Scale Language Modeling: Converging on 40GB of Text in Four Hours, 2018, SBAC-PAD.
[22] Alec Radford, et al. Improving Language Understanding by Generative Pre-Training, 2018.
[23] Ruslan Salakhutdinov, et al. Breaking the Softmax Bottleneck: A High-Rank RNN Language Model, 2017, ICLR.
[24] Noah Constant, et al. Character-Level Language Modeling with Deeper Self-Attention, 2018, AAAI.
[25] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.