Anima Anandkumar | Yakov Kronrod | Zachary C. Lipton | Hyokun Yun | Yanyao Shen
[1] Chicheng Zhang, et al. Revisiting Perceptron: Efficient and Label-Optimal Learning of Halfspaces, 2017, NIPS.
[2] Guillaume Lample, et al. Neural Architectures for Named Entity Recognition, 2016, NAACL.
[3] Rishabh K. Iyer, et al. Submodularity in Data Subset Selection and Active Learning, 2015, ICML.
[4] Jason Weston, et al. Natural Language Processing (Almost) from Scratch, 2011, J. Mach. Learn. Res.
[5] Hwee Tou Ng, et al. Towards Robust Linguistic Analysis using OntoNotes, 2013, CoNLL.
[6] Halil Kilicoglu, et al. Annotating Named Entities in Consumer Health Questions, 2016, LREC.
[7] Adam Tauman Kalai, et al. Analysis of Perceptron-Based Active Learning, 2009, COLT.
[8] Andrew McCallum, et al. Fast and Accurate Entity Recognition with Iterated Dilated Convolutions, 2017, EMNLP.
[9] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[10] Yoshua Bengio, et al. Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding, 2013, INTERSPEECH.
[11] John Langford, et al. Importance weighted active learning, 2008, ICML '09.
[12] Maria-Florina Balcan, et al. The Power of Localization for Efficiently Learning Linear Separators with Noise, 2013, J. ACM.
[13] Yoshua Bengio, et al. Convolutional networks for images, speech, and time series, 1998.
[14] Zoubin Ghahramani, et al. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks, 2015, NIPS.
[15] Kalina Bontcheva, et al. Crowdsourcing Named Entity Recognition and Entity Linking Corpora, 2017.
[16] Andreas Krause, et al. Streaming submodular maximization: massive data summarization on the fly, 2014, KDD.
[17] Ruslan Salakhutdinov, et al. Multi-Task Cross-Lingual Sequence Tagging from Scratch, 2016, ArXiv.
[18] Eric Nichols, et al. Named Entity Recognition with Bidirectional LSTM-CNNs, 2015, TACL.
[19] Avirup Sil, et al. Toward Mention Detection Robustness with Recurrent Neural Networks, 2016, ArXiv.
[20] Tara N. Sainath, et al. Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups, 2012, IEEE Signal Processing Magazine. doi:10.1109/MSP.2012.2205597.
[21] Michael A. Arbib, et al. The handbook of brain theory and neural networks, 1995, A Bradford book.
[22] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[23] Ye Zhang, et al. Active Discriminative Text Representation Learning, 2016, AAAI.
[24] Christopher D. Manning. Computational Linguistics and Deep Learning, 2015, Computational Linguistics.
[25] Wang Ling, et al. Two/Too Simple Adaptations of Word2Vec for Syntax Problems, 2015, NAACL.
[26] Ruimao Zhang, et al. Cost-Effective Active Learning for Deep Image Classification, 2017, IEEE Transactions on Circuits and Systems for Video Technology.
[27] Zoubin Ghahramani, et al. Deep Bayesian Active Learning with Image Data, 2017, ICML.
[28] Bowen Zhou, et al. Neural Models for Sequence Chunking, 2017, AAAI.
[29] Daphne Koller, et al. Support Vector Machine Active Learning with Applications to Text Classification, 2000, J. Mach. Learn. Res.
[30] Yoon Kim, et al. Convolutional Neural Networks for Sentence Classification, 2014, EMNLP.
[31] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[32] Andreas Krause, et al. Submodular Function Maximization, 2014, Tractability.
[33] Chicheng Zhang, et al. Revisiting Perceptron: Efficient and Label-Optimal Active Learning of Halfspaces, 2017, ArXiv.
[34] Alex Kendall, et al. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, 2017, NIPS.
[35] Andrew McCallum, et al. Reducing Labeling Effort for Structured Prediction Tasks, 2005, AAAI.
[36] Jian Su, et al. Multi-Criteria-based Active Learning for Named Entity Recognition, 2004, ACL.
[37] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[38] John Langford, et al. Agnostic active learning, 2006, J. Comput. Syst. Sci.
[39] Pushmeet Kohli, et al. Tractability: Practical Approaches to Hard Problems, 2013.
[40] Burr Settles, et al. Active Learning Literature Survey, 2009.
[41] Erik F. Tjong Kim Sang, et al. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition, 2003, CoNLL.
[42] Fredrik Olsson, et al. A literature survey of active machine learning in the context of natural language processing, 2009.
[43] M. L. Fisher, et al. An analysis of approximations for maximizing submodular set functions—I, 1978, Math. Program.
[44] Mark Craven, et al. An Analysis of Active Learning Strategies for Sequence Labeling Tasks, 2008, EMNLP.
[45] Iryna Gurevych, et al. Reporting Score Distributions Makes a Difference: Performance Study of LSTM-networks for Sequence Tagging, 2017, EMNLP.
[46] Neil D. Lawrence, et al. Deep Gaussian Processes, 2012, AISTATS.
[47] Slav Petrov, et al. Globally Normalized Transition-Based Neural Networks, 2016, ACL.
[48] Wei Xu, et al. Bidirectional LSTM-CRF Models for Sequence Tagging, 2015, ArXiv.
[49] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[50] Geoffrey E. Hinton, et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[51] Andreas Krause, et al. Cost-effective outbreak detection in networks, 2007, KDD '07.
[52] William A. Gale, et al. A sequential algorithm for training text classifiers, 1994, SIGIR '94.
[53] Andrew McCallum, et al. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data, 2001, ICML.