Temporal Reasoning on Implicit Events from Distant Supervision

We propose TRACIE, a novel temporal reasoning dataset that evaluates the degree to which systems understand implicit events: events that are not mentioned explicitly in natural language text but can be inferred from it. This introduces a new challenge in temporal reasoning research, where prior work has focused on explicitly mentioned events. Human readers can infer implicit events via commonsense reasoning, resulting in a more comprehensive understanding of the situation and, consequently, better reasoning about time. We find, however, that state-of-the-art models struggle when predicting temporal relationships between implicit and explicit events. To address this, we propose a neuro-symbolic temporal reasoning model, SymTime, which exploits distant supervision signals from large-scale text and uses temporal rules to combine start times and durations to infer end times. SymTime outperforms strong baseline systems on TRACIE by 5%, and by 11% in a zero-prior-knowledge training setting. Our approach also generalizes to other temporal reasoning tasks, as evidenced by a gain of 1%–9% on MATRES, an explicit event benchmark.
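To make the kind of temporal rule mentioned above concrete, here is a minimal sketch, not the authors' implementation: assuming a model predicts a start time and a duration for each event (the `EventEstimate` class, field names, and example values below are hypothetical), an end time follows by simple addition, and a relation such as "event A ends before event B starts" can then be checked arithmetically.

```python
from dataclasses import dataclass


@dataclass
class EventEstimate:
    """Hypothetical point estimates (in hours on a shared timeline) for one event."""
    start: float     # predicted start time
    duration: float  # predicted duration

    @property
    def end(self) -> float:
        # Temporal rule: end time = start time + duration.
        return self.start + self.duration


def ends_before_starts(a: EventEstimate, b: EventEstimate) -> bool:
    """Check the relation 'a ends before b starts' using the inferred end time of a."""
    return a.end < b.start


# Illustrative example: an implicit event (packing) versus an explicit one (boarding).
packing = EventEstimate(start=0.0, duration=2.0)
boarding = EventEstimate(start=3.0, duration=0.5)
print(ends_before_starts(packing, boarding))  # True
```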
