Temporal attention augmented transformer Hawkes process

In recent years, mining knowledge from asynchronous event sequences with Hawkes processes has attracted sustained attention, and neural-network-based Hawkes processes, especially those built on recurrent neural networks (RNNs), have become a highly active research area. However, these models inherit the shortcomings of RNNs, such as vanishing and exploding gradients and difficulty in capturing long-term dependencies. Meanwhile, the Transformer, based on self-attention, has achieved great success in sequence modeling tasks such as text processing and speech recognition. Although the Transformer Hawkes process (THP) has delivered substantial performance gains, it does not exploit the temporal information in asynchronous events effectively: in such sequences the event occurrence instants are as important as the event types, yet conventional THPs simply convert timestamps into positional encodings and add them to the Transformer input. With this in mind, we propose a new Transformer-based Hawkes process model, the Temporal Attention Augmented Transformer Hawkes Process (TAA-THP), which modifies the traditional dot-product attention structure and introduces temporal encoding directly into the attention computation. We conduct extensive experiments on a wide range of synthetic and real-life datasets to validate the performance of the proposed TAA-THP model, and achieve significant improvements over existing baselines on several measures, including log-likelihood on the test set and the prediction accuracy of event types and occurrence times. In addition, through ablation studies we demonstrate the merit of the additional temporal attention by comparing the performance of the model with and without it.
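To make the contrast concrete, the sketch below illustrates the two ingredients the abstract refers to: a THP-style sinusoidal encoding of continuous event times, and a dot-product attention whose scores receive an additive temporal term. It is a minimal PyTorch sketch under our own assumptions; the names `temporal_encoding`, `temporal_augmented_attention`, and the extra temporal query projection `q_tau` are illustrative and are not the authors' published parameterisation.

```python
import math
import torch
import torch.nn.functional as F


def temporal_encoding(timestamps: torch.Tensor, d_model: int) -> torch.Tensor:
    """Sinusoidal encoding of continuous event timestamps (THP-style).

    timestamps: (batch, seq_len) tensor of event occurrence times.
    Returns:    (batch, seq_len, d_model) encoding, d_model assumed even.
    """
    positions = timestamps.unsqueeze(-1)                        # (B, L, 1)
    i = torch.arange(0, d_model, 2, device=timestamps.device)   # even dims
    div = torch.exp(-math.log(10000.0) * i / d_model)           # (d_model/2,)
    enc = torch.zeros(*timestamps.shape, d_model, device=timestamps.device)
    enc[..., 0::2] = torch.sin(positions * div)
    enc[..., 1::2] = torch.cos(positions * div)
    return enc


def temporal_augmented_attention(q, k, v, q_tau, temp_enc, mask=None):
    """Scaled dot-product attention with an additive temporal score.

    q, k, v:  (batch, heads, seq_len, d_k) standard projections of the input.
    q_tau:    (batch, heads, seq_len, d_k) hypothetical extra query projection
              dedicated to the temporal term (an assumption of this sketch).
    temp_enc: (batch, heads, seq_len, d_k) projection of the temporal encoding.

    The score is (q k^T + q_tau temp_enc^T) / sqrt(d_k), i.e. temporal
    information enters the attention weights rather than only the input.
    """
    d_k = q.size(-1)
    scores = (torch.matmul(q, k.transpose(-2, -1))
              + torch.matmul(q_tau, temp_enc.transpose(-2, -1))) / math.sqrt(d_k)
    if mask is not None:  # causal mask so an event only attends to its history
        scores = scores.masked_fill(mask, float('-inf'))
    attn = F.softmax(scores, dim=-1)
    return torch.matmul(attn, v)
```

Dropping the second matmul term recovers ordinary dot-product attention, which is the comparison the ablation study in the abstract performs.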
