Short Term Memory Quantifications in Input-Driven Linear Dynamical Systems

We investigate the relation between two quantitative measures characterizing short term memory in input-driven dynamical systems, namely the short term memory capacity (MC) (2) and the Fisher memory curve (FMC) (3). We show that under some assumptions the two quantities can be interpreted as squared 'Mahalanobis' norms of images of the input vector under the system's dynamics, and that even though MC and FMC map the memory structure of the system from two quite different perspectives, they are linked by a close relation.

1 Introduction

Input-driven dynamical systems play an important role as machine learning models when data sets exhibit temporal dependencies, e.g. in prediction or control. In an attempt to characterize dynamic properties of such systems, measures have been suggested that quantify how well past information can be represented in the system's internal state. In this contribution we investigate two such well-known measures, namely the short term memory capacity spectrum MC_k (2) and the Fisher memory curve J(k) (3). The two quantities map the memory structure of the system under investigation from two quite different perspectives, yet so far their relation has not been closely investigated. In this paper we take a first step to bridge this gap and show that under some conditions MC_k and J(k) can be related in an interpretable manner.

2 Background

We study linear input-driven state space models with an N-dimensional state space and univariate inputs and outputs. Such systems can be represented, e.g., by linear Echo State Networks (ESNs) (1) with N recurrent (reservoir) units. The activations of the input, internal (state), and output units at time step t are denoted by s(t), x(t), and y(t), respectively. The input-to-recurrent and recurrent-to-output connections are given by N-dimensional weight vectors v and u, respectively; connections between the internal units are collected in an N x N weight matrix W.
We assume there are no feedback connections from the output to the reservoir and no direct connections from the input to the output. Under these assumptions, the state update and output read x(t) = W x(t-1) + v s(t) and y(t) = u^T x(t).

[1] M. Lukoševičius and H. Jaeger. Reservoir computing approaches to recurrent neural network training. Computer Science Review, 2009.

[2] A. Rodan and P. Tiňo. Minimum complexity echo state network. IEEE Transactions on Neural Networks, 2011.

[3] S. Ganguli, D. Huh, and H. Sompolinsky. Memory traces in dynamical systems. Proceedings of the National Academy of Sciences, 2008.