Giuseppe Carenini | Hyeju Jang | Lanjun Wang | Raymond Li | Wen Xiao
[1] Alexander Löser, et al. VisBERT: Hidden-State Visualizations for Transformers, 2020, WWW.
[2] Minsuk Kahng, et al. Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers, 2018, IEEE Transactions on Visualization and Computer Graphics.
[3] Yuval Pinter, et al. Attention is not not Explanation, 2019, EMNLP.
[4] Phil Blunsom, et al. Teaching Machines to Read and Comprehend, 2015, NIPS.
[5] Mirella Lapata, et al. Text Summarization with Pretrained Encoders, 2019, EMNLP.
[6] Omer Levy, et al. What Does BERT Look at? An Analysis of BERT’s Attention, 2019, BlackboxNLP@ACL.
[7] Moshe Wasserblat, et al. InterpreT: An Interactive Visualization Tool for Interpreting Transformers, 2021, EACL.
[8] Omer Levy, et al. SpanBERT: Improving Pre-training by Representing and Predicting Spans, 2019, TACL.
[9] Martin Wattenberg, et al. Embedding Projector: Interactive Visualization and Interpretation of Embeddings, 2016, ArXiv.
[10] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[11] Hao Li, et al. Visualizing the Loss Landscape of Neural Nets, 2017, NeurIPS.
[12] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[13] Furu Wei, et al. Visualizing and Understanding the Effectiveness of BERT, 2019, EMNLP.
[14] Omer Levy, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, 2019, ACL.
[15] Minsuk Kahng, et al. ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models, 2017, IEEE Transactions on Visualization and Computer Graphics.
[16] Jesse Vig, et al. A Multiscale Visualization of Attention in the Transformer Model, 2019, ACL.
[17] Ke Xu, et al. Investigating Learning Dynamics of BERT Fine-Tuning, 2020, AACL.
[18] Avanti Shrikumar, et al. Learning Important Features Through Propagating Activation Differences, 2017, ICML.
[19] Xiang Zhang, et al. Character-level Convolutional Networks for Text Classification, 2015, NIPS.
[20] Yejin Choi, et al. Dataset Cartography: Mapping and Diagnosing Datasets with Training Dynamics, 2020, EMNLP.
[21] Yang Chen, et al. Interactive Correction of Mislabeled Training Data, 2019, IEEE Conference on Visual Analytics Science and Technology (VAST).
[22] Sebastian Gehrmann, et al. exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformer Models, 2019, ArXiv.
[23] Sameer Singh, et al. AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models, 2019, EMNLP.
[24] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[25] Daniel Jurafsky, et al. Understanding Neural Networks through Representation Erasure, 2016, ArXiv.
[26] Pavlo Molchanov, et al. Importance Estimation for Neural Network Pruning, 2019, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Heng Tao Shen, et al. Principal Component Analysis, 2009, Encyclopedia of Biometrics.
[28] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[29] Yoshua Bengio, et al. An Empirical Study of Example Forgetting during Deep Neural Network Learning, 2018, ICLR.
[30] Alexander Binder, et al. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation, 2015, PLoS ONE.
[31] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.
[32] Giuseppe Carenini, et al. NJM-Vis: interpreting neural joint models in NLP, 2020, IUI.
[33] Byron C. Wallace, et al. Attention is not Explanation, 2019, NAACL.
[34] Anna Rumshisky, et al. Revealing the Dark Secrets of BERT, 2019, EMNLP.
[35] Lysandre Debut, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, ArXiv.
[36] Tamara Munzner, et al. A Nested Model for Visualization Design and Validation, 2009, IEEE Transactions on Visualization and Computer Graphics.
[37] Wojciech Samek, et al. Methods for interpreting and understanding deep neural networks, 2017, Digital Signal Processing.
[38] Geoffrey E. Hinton, et al. Visualizing Data using t-SNE, 2008, Journal of Machine Learning Research.
[39] Tolga Bolukbasi, et al. The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models, 2020, EMNLP.
[40] Fedor Moiseev, et al. Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned, 2019, ACL.
[41] Jaewoo Kang, et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining, 2019, Bioinformatics.
[42] Leland McInnes, et al. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction, 2018, ArXiv.
[43] Elahe Rahimtoroghi, et al. What Happens To BERT Embeddings During Fine-tuning?, 2020, BlackboxNLP@EMNLP.
[44] Jeffrey Heer, et al. D3 Data-Driven Documents, 2011, IEEE Transactions on Visualization and Computer Graphics.