Explainable nonlinear modelling of multiple time series with invertible neural networks

A method for nonlinear topology identification is proposed, based on the assumption that a collection of time series is generated in two steps: i) a vector autoregressive (VAR) process in a latent space, and ii) a nonlinear, component-wise, monotonically increasing observation mapping. These mappings are assumed invertible and are modeled as shallow neural networks, so that their inverses can be evaluated numerically and their parameters can be learned with a technique inspired by deep learning. Because of the function inversion, the backpropagation step is not straightforward, and this paper explains how the required gradients can be computed via implicit differentiation. The resulting model is as explainable as a linear VAR process, while preliminary numerical tests show a reduction in prediction error.
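
The abstract describes the two-step model and the gradient computation only in words. The sketch below is a minimal illustration under assumptions of our own, not the authors' implementation: the names MonotoneScalarNet and invert, the parameterization of the monotone map, and all hyperparameters (n_hidden, the bisection bounds and iteration count) are hypothetical. It shows i) a component-wise, strictly increasing shallow network playing the role of the observation mapping applied to a latent VAR sample, and ii) how its numerical inverse can be differentiated via the implicit function theorem so that the parameters remain trainable by backpropagation.

```python
# Minimal sketch (not the authors' code): monotone shallow observation map and
# implicit-differentiation gradients through its numerical inverse.
# All names and hyperparameters below are illustrative assumptions.
import torch


class MonotoneScalarNet(torch.nn.Module):
    """Component-wise observation map f: a strictly increasing shallow network."""

    def __init__(self, n_hidden=10):
        super().__init__()
        self.a = torch.nn.Parameter(torch.randn(n_hidden))  # slopes (made positive below)
        self.b = torch.nn.Parameter(torch.randn(n_hidden))  # biases
        self.w = torch.nn.Parameter(torch.randn(n_hidden))  # output weights (made positive)
        self.c = torch.nn.Parameter(torch.tensor(1.0))      # positive linear term -> invertible on R

    def forward(self, z):
        a = torch.nn.functional.softplus(self.a)
        w = torch.nn.functional.softplus(self.w)
        c = torch.nn.functional.softplus(self.c)
        return c * z + torch.tanh(a * z.unsqueeze(-1) + self.b) @ w


def invert(f, y, z_min=-100.0, z_max=100.0, n_bisect=60):
    """Numerically evaluate z = f^{-1}(y) and attach implicit-differentiation gradients."""
    lo = torch.full_like(y, z_min)
    hi = torch.full_like(y, z_max)
    with torch.no_grad():                          # bisection is valid because f is increasing
        for _ in range(n_bisect):
            mid = 0.5 * (lo + hi)
            below = f(mid) < y
            lo = torch.where(below, mid, lo)
            hi = torch.where(below, hi, mid)
    z0 = (0.5 * (lo + hi)).requires_grad_(True)    # numerical root, detached from y and theta
    fz = f(z0)
    (fprime,) = torch.autograd.grad(fz.sum(), z0, create_graph=True)
    # One Newton-style correction: numerically equal to z0, but its graph encodes the
    # implicit-function-theorem gradients dz/dy = 1/f'(z) and dz/dtheta = -(df/dtheta)/f'(z).
    return z0.detach() - (fz - y) / fprime


# Usage: map latent samples (e.g., drawn from a latent VAR process) through f,
# then recover them by numerical inversion, up to bisection tolerance.
f = MonotoneScalarNet()
z_true = torch.randn(5)
y = f(z_true).detach()
z_hat = invert(f, y)
print(torch.allclose(z_hat, z_true, atol=1e-4))    # True
```

The final correction step is numerically a no-op at the root, but its computation graph reproduces exactly the gradients prescribed by the implicit function theorem, so the inverse mapping can be trained end-to-end with a standard stochastic optimizer.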
