Ensembling Graph Predictions for AMR Parsing

In many machine learning tasks, models are trained to predict structured data such as graphs. In natural language processing, for example, texts are commonly parsed into dependency trees or Abstract Meaning Representation (AMR) graphs. Ensemble methods, meanwhile, combine predictions from multiple models to create a new prediction that is more robust and accurate than any individual one. Many ensembling techniques have been proposed for classification and regression problems; ensemble graph prediction, however, has not been studied thoroughly. In this work, we formalize this problem as mining the largest graph that is most supported by a collection of graph predictions. Because the problem is NP-hard, we propose an efficient heuristic algorithm to approximate the optimal solution. To validate our approach, we carried out experiments on AMR parsing. The experimental results demonstrate that the proposed approach can combine the strengths of state-of-the-art AMR parsers to create new predictions that are more accurate than those of any individual model on five standard benchmark datasets.
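
The abstract does not spell out the heuristic itself; below is a minimal sketch of one natural instantiation, support-based voting over the triples of the predicted graphs, which captures the "largest most-supported graph" trade-off. The set-of-triples encoding (the same view of AMR graphs used by the Smatch metric), the `ensemble_graphs` helper, and the `min_support` threshold are illustrative assumptions, not the paper's exact algorithm.

```python
from collections import Counter
from typing import Iterable, Set, Tuple

# A graph is encoded as a set of (source, relation, target) triples,
# the same view of AMR graphs that the Smatch metric operates on.
Triple = Tuple[str, str, str]
Graph = Set[Triple]

def ensemble_graphs(predictions: Iterable[Graph], min_support: int) -> Graph:
    """Hypothetical support-voting heuristic: keep every triple that
    appears in at least `min_support` of the input predictions.

    Raising `min_support` yields a smaller but better-supported graph;
    lowering it yields a larger but noisier one.
    """
    votes = Counter(t for g in predictions for t in g)
    return {t for t, count in votes.items() if count >= min_support}

# Example: three parsers' outputs for the same sentence.
g1 = {("w", "instance", "want-01"), ("w", "ARG0", "b"), ("b", "instance", "boy")}
g2 = {("w", "instance", "want-01"), ("w", "ARG0", "b"), ("b", "instance", "girl")}
g3 = {("w", "instance", "want-01"), ("b", "instance", "boy")}

print(ensemble_graphs([g1, g2, g3], min_support=2))
# -> {('w', 'instance', 'want-01'), ('w', 'ARG0', 'b'), ('b', 'instance', 'boy')}
```

Note that real AMR ensembling must first align node variables across the different parses; that alignment is what makes the underlying optimization NP-hard, and the sketch above simply assumes the variable names already agree.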
