Xiang Ren | Jingbo Shang | Jiawei Han | Liyuan Liu | Jian Peng