Explaining Neural Network Predictions on Sentence Pairs via Learning Word-Group Masks

Explaining neural network models is important for increasing their trustworthiness in real-world applications. Most existing methods generate post-hoc explanations for neural network models by identifying individual feature attributions or detecting interactions between adjacent features. However, for models that take text pairs as inputs (e.g., paraphrase identification), existing methods are not sufficient to capture feature interactions between the two texts, and the straightforward extension of computing all word-pair interactions across the two texts is computationally inefficient. In this work, we propose the Group Mask (GMASK) method, which implicitly detects word correlations by grouping correlated words from the input text pair together and measures their contribution to the corresponding NLP task as a whole. The proposed method is evaluated with two different model architectures (the decomposable attention model and BERT) on four datasets covering natural language inference and paraphrase identification tasks. Experiments show the effectiveness of GMASK in providing faithful explanations for these models.
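The abstract describes the idea only at a high level. The following is a minimal, hypothetical sketch of how word-group masking for a sentence pair could work: words from both sentences are softly assigned to groups with the Gumbel-softmax trick, and each group is kept or dropped as a whole so its contribution to the model prediction can be scored. The toy classifier, tensor shapes, variable names, and scoring rule below are illustrative assumptions, not the authors' implementation of GMASK.

# Illustrative sketch (not the authors' code): word-group masking for a sentence-pair classifier.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

emb_dim, num_words, num_groups = 16, 8, 3   # 8 words total across the sentence pair (assumed)

# Toy stand-in for a trained sentence-pair classifier: mean-pool word embeddings, linear layer.
classifier = torch.nn.Linear(emb_dim, 2)

def predict(masked_embeddings):
    """Class probabilities given (possibly masked) word embeddings."""
    pooled = masked_embeddings.mean(dim=0)
    return F.softmax(classifier(pooled), dim=-1)

# Random embeddings standing in for the concatenated premise/hypothesis word vectors.
embeddings = torch.randn(num_words, emb_dim)

# Learnable group-assignment logits: row i gives word i's affinity to each group.
group_logits = torch.zeros(num_words, num_groups, requires_grad=True)

# Sample (approximately) one-hot group assignments with Gumbel-softmax, keeping the
# sampling step differentiable so the logits could in principle be trained against an
# objective such as preserving the prediction while keeping few word groups.
assignments = F.gumbel_softmax(group_logits, tau=0.5, hard=True)   # (num_words, num_groups)

original_pred = predict(embeddings)
predicted_class = original_pred.argmax()

# Score each group as a whole: keep only its words and check how much of the
# original predicted-class probability survives.
for g in range(num_groups):
    word_mask = assignments[:, g].unsqueeze(-1)        # 1 for words assigned to group g, else 0
    kept_words = [i for i in range(num_words) if word_mask[i] > 0.5]
    score = predict(embeddings * word_mask)[predicted_class]
    print(f"group {g}: words {kept_words}, "
          f"predicted-class probability with only this group kept = {score.item():.3f}")

Under these assumptions, a group whose words carry the interaction that drives the prediction (e.g., an aligned phrase pair across the two sentences) would retain most of the predicted-class probability on its own, which is the kind of group-level contribution the abstract refers to.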
