Kentaro Inui | Sho Yokoi | Jun Suzuki | Sosuke Kobayashi
[1] Christopher Potts, et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, 2013, EMNLP.
[2] Takanori Maehara, et al. Data Cleansing for Models Trained with SGD, 2019, NeurIPS.
[3] Ali Farhadi, et al. Supermasks in Superposition, 2020, NeurIPS.
[4] Sosuke Kobayashi, et al. All Word Embeddings from One Embedding, 2020, NeurIPS.
[5] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[6] Sameer Singh, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, NAACL.
[7] Nitish Srivastava, et al. Improving Neural Networks by Preventing Co-adaptation of Feature Detectors, 2012, arXiv.
[8] Richard Socher, et al. Pointer Sentinel Mixture Models, 2017, ICLR.
[9] Mike Schuster, et al. Japanese and Korean Voice Search, 2012, ICASSP.
[10] Mona Attariyan, et al. Parameter-Efficient Transfer Learning for NLP, 2019, ICML.
[11] R. Dennis Cook. Detection of Influential Observation in Linear Regression, 1977, Technometrics.
[12] Rémi Louf, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, arXiv.
[13] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[14] George Kurian, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016, arXiv.
[15] Samuel R. Bowman, et al. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference, 2018, NAACL.
[16] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2015, ICLR.
[17] Yoshua Bengio, et al. A Neural Probabilistic Language Model, 2003, J. Mach. Learn. Res.
[18] Christopher D. Manning, et al. Stanza: A Python Natural Language Processing Toolkit for Many Human Languages, 2020, ACL.
[19] Xiang Zhang, et al. Character-level Convolutional Networks for Text Classification, 2015, NIPS.
[20] Yulia Tsvetkov, et al. Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions, 2020, ACL.
[21] Rico Sennrich, et al. Neural Machine Translation of Rare Words with Subword Units, 2016, ACL.
[22] John Blitzer, et al. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification, 2007, ACL.
[23] Lorenzo Porzi, et al. Dropout Distillation, 2016, ICML.
[24] Stephen J. Wright, et al. Training Set Debugging Using Trusted Items, 2018, AAAI.
[25] Nitish Srivastava, et al. Dropout: A Simple Way to Prevent Neural Networks from Overfitting, 2014, J. Mach. Learn. Res.
[26] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009, Technical Report.
[27] Philip Bachman, et al. Learning with Pseudo-Ensembles, 2014, NIPS.
[28] Pierre Baldi, et al. The Dropout Learning Algorithm, 2014, Artif. Intell.