Manaal Faruqui | Shyam Upadhyay | Gaurav Singh Tomar | Shikhar Vashishth