The ability to quantify complex relationships within multivariate time series is a key component of modelling many physical systems, from the climate to brains and other biophysical phenomena. Unfortunately, even testing the significance of simple dependence measures, such as Pearson correlation, is complicated by altered sampling properties when autocorrelation is present in the individual time series. Moreover, it has recently been established that commonly used multivariate dependence measures---such as Granger causality---can produce substantially inaccurate results when classical hypothesis-testing procedures are applied to digitally filtered time series. Here, we suggest that the filtering-induced bias in Granger causality is an effect of autocorrelation, and we present a principled statistical framework for hypothesis testing of a large family of linear-dependence measures between multiple autocorrelated time series. Our approach unifies the theoretical foundations established by Bartlett and others on variance estimators for autocorrelated signals with the more intricate multivariate measures of linear dependence. Specifically, we derive the sampling distributions and subsequent hypothesis tests for any measure that can be decomposed into terms involving independent partial correlations, which we show includes Granger causality and mutual information under a multivariate linear-Gaussian model. In doing so, we provide the first exact tests for inferring linear dependence between vector autoregressive processes with limited data. Using numerical simulations and brain-imaging datasets, we demonstrate that our newly developed tests maintain the expected false-positive rate (FPR) with minimally sufficient samples, while the classical log-likelihood ratio tests can yield an unbounded FPR depending on the parameters chosen.
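The autocorrelation problem described above can be demonstrated with a short simulation. The sketch below (not the paper's method, but the classical Bartlett-style correction the abstract builds on) generates pairs of *independent* AR(1) series, tests their Pearson correlation both naively and with an effective sample size derived from Bartlett's variance formula, var(r) ≈ (1/n)(1 + 2 Σₖ ρₓ(k)ρ_y(k)), and compares empirical false-positive rates. The AR coefficient `phi`, sample size `n`, and trial count are illustrative choices, and the correction here uses the known autocorrelation function rather than an estimate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ar1(n, phi, rng):
    """Simulate an AR(1) process x[t] = phi * x[t-1] + noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

n, phi, trials, alpha = 200, 0.8, 2000, 0.05

# For two AR(1) processes with the same coefficient, the Bartlett
# correction term is 2 * sum_k phi^(2k), which shrinks the
# effective sample size well below n.
n_eff = n / (1 + 2 * sum(phi ** (2 * k) for k in range(1, n)))

naive_fp = corrected_fp = 0
for _ in range(trials):
    x, y = ar1(n, phi, rng), ar1(n, phi, rng)  # independent by construction
    r = np.corrcoef(x, y)[0, 1]

    # Naive test: assumes n - 2 degrees of freedom, ignoring autocorrelation.
    t_naive = r * np.sqrt((n - 2) / (1 - r ** 2))
    p_naive = 2 * stats.t.sf(abs(t_naive), df=n - 2)

    # Bartlett-corrected test: same statistic, effective degrees of freedom.
    t_corr = r * np.sqrt((n_eff - 2) / (1 - r ** 2))
    p_corr = 2 * stats.t.sf(abs(t_corr), df=n_eff - 2)

    naive_fp += p_naive < alpha
    corrected_fp += p_corr < alpha

print(f"naive FPR:     {naive_fp / trials:.3f}")
print(f"corrected FPR: {corrected_fp / trials:.3f}")
```

Under these settings the naive test rejects the true null far more often than the nominal 5%, while the corrected test stays near it; this is the same FPR inflation the abstract reports for uncorrected likelihood-ratio tests of Granger causality.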