InterpreT: An Interactive Visualization Tool for Interpreting Transformers
Moshe Wasserblat | Oren Pereg | Phillip Howard | Estelle Aflalo | Daniel Korat | Vasudev Lal | Arden Ma | Ana Simões | Gadi Singer