Human-Level Interpretable Learning for Aspect-Based Sentiment Analysis

This paper proposes a human-interpretable learning approach for aspect-based sentiment analysis (ABSA), employing the recently introduced Tsetlin Machine (TM). We attain interpretability by converting the intricate position-dependent textual semantics into binary form, mapping all features into bag-of-words (BOW) representations. The binary BOWs are encoded so that the information on the aspect and context words is retained for sentiment classification. We then use the BOWs as input to the TM, which learns aspect-based sentiment patterns as propositional logic clauses. To evaluate interpretability and accuracy, we conducted experiments on two widely used ABSA datasets from SemEval 2014: Restaurant 14 and Laptop 14. The experiments show how each relevant feature takes part in conjunctive clauses that capture the context of the corresponding aspect word, demonstrating human-level interpretability. At the same time, the accuracy obtained is on par with existing neural network models, reaching 78.02% on Restaurant 14 and 73.51% on Laptop 14.
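
As a concrete illustration of the binarization step described above, the following minimal Python sketch shows how a (sentence, aspect) pair could be turned into a binary bag-of-words feature vector of the kind a Tsetlin Machine consumes. The tokenization, vocabulary construction, and the scheme of appending separate aspect bits are illustrative assumptions, not the paper's exact encoding.

from typing import List

def build_vocabulary(corpus: List[str]) -> List[str]:
    # Collect the unique tokens of the corpus as the BOW vocabulary.
    return sorted({token for text in corpus for token in text.lower().split()})

def binarize(text: str, aspect: str, vocab: List[str]) -> List[int]:
    # Encode one (sentence, aspect) pair as a binary feature vector.
    # The first len(vocab) bits flag which context words occur in the sentence;
    # the second len(vocab) bits flag the aspect term, so the classifier can
    # condition its clauses on the aspect word (an assumed encoding).
    tokens = set(text.lower().split())
    aspect_tokens = set(aspect.lower().split())
    context_bits = [1 if w in tokens else 0 for w in vocab]
    aspect_bits = [1 if w in aspect_tokens else 0 for w in vocab]
    return context_bits + aspect_bits

if __name__ == "__main__":
    corpus = [
        "the pasta was delicious but the service was slow",
        "great battery life , terrible keyboard",
    ]
    vocab = build_vocabulary(corpus)
    x = binarize(corpus[0], "service", vocab)
    print(len(vocab), sum(x))  # vocabulary size and number of active bits

A feature matrix built from such vectors could then be handed to any Tsetlin Machine implementation; the conjunctive clauses it learns over these bits and their negations are what make the resulting sentiment rules directly readable.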
