暂无分享,去创建一个
[1] Saif Mohammad,et al. WASSA-2017 Shared Task on Emotion Intensity , 2017, WASSA@EMNLP.
[2] Jürgen Schmidhuber,et al. Framewise phoneme classification with bidirectional LSTM and other neural network architectures , 2005, Neural Networks.
[3] Holger Schwenk,et al. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data , 2017, EMNLP.
[4] Thorsten Brants,et al. One billion word benchmark for measuring progress in statistical language modeling , 2013, INTERSPEECH.
[5] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[6] Nitish Srivastava,et al. Dropout: a simple way to prevent neural networks from overfitting , 2014, J. Mach. Learn. Res..
[7] Sebastian Ruder,et al. Universal Language Model Fine-tuning for Text Classification , 2018, ACL.
[8] Saif Mohammad,et al. IEST: WASSA-2018 Implicit Emotions Shared Task , 2018, WASSA@EMNLP.
[9] Geoffrey E. Hinton,et al. Speech recognition with deep recurrent neural networks , 2013, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.
[10] Guillaume Lample,et al. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties , 2018, ACL.
[11] Hamed R. Bonab,et al. A Theoretical Framework on the Ideal Number of Classifiers for Online Ensembles in Data Streams , 2016, CIKM.
[12] Luca Antiga,et al. Automatic differentiation in PyTorch , 2017 .
[13] Brendan T. O'Connor,et al. Part-of-Speech Tagging for Twitter: Annotation, Features, and Experiments , 2010, ACL.
[14] Luke S. Zettlemoyer,et al. Deep Contextualized Word Representations , 2018, NAACL.