Measure of predictability.

Many techniques have been developed to measure the difficulty of forecasting data from an observed time series. This paper introduces a measure, which we call the "forecast entropy," designed to quantify the predictability of a time series. We use attractors reconstructed from the time series and the distributions in the regular and tangent spaces of the data that make up the attractor, and we consider these distributions on different scales. We present a formula for calculating the forecast entropy. To provide a standard of predictability, we define an idealized random system whose forecast entropy is maximal; we then use this to rescale the forecast entropy to lie in the range [0, 1]. Time series obtained from several chaotic systems, as well as from a pseudorandom system, are studied with this measure. We present evidence that the forecast entropy can be used as a tool for determining the optimal delay and embedding dimension for reconstructing better attractors. We also show that the forecast entropy of a random system has completely different characteristics from that of a deterministic one.
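The abstract outlines the pipeline without giving the formula itself. As a rough illustration only, the sketch below reconstructs an attractor from a scalar series by delay embedding and computes a binned Shannon entropy of the reconstructed points at several grid scales, rescaled by the entropy of a uniform distribution over the same grid (standing in for the idealized random system). The function names, the noisy-sine test series, and the choice of a simple binned entropy are assumptions for illustration, not the paper's actual forecast-entropy formula.

```python
# Minimal sketch, assuming delay-coordinate embedding and a binned
# Shannon entropy as stand-ins for the paper's construction. The
# actual forecast entropy also uses distributions in the tangent
# space, which this sketch omits.
import numpy as np

def delay_embed(x, dim, delay):
    """Reconstruct a dim-dimensional attractor from the scalar series x
    using delay coordinates (Takens-style embedding)."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

def binned_entropy(points, n_bins):
    """Shannon entropy (bits) of the distribution of points over a
    uniform grid with n_bins cells per axis (one choice of 'scale')."""
    hist, _ = np.histogramdd(points, bins=n_bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Stand-in scalar series (a noisy sine); in the paper this would be
# output from a chaotic or pseudorandom system.
rng = np.random.default_rng(0)
x = np.sin(0.1 * np.arange(5000)) + 0.05 * rng.standard_normal(5000)

attractor = delay_embed(x, dim=3, delay=10)
for bins in (4, 8, 16):                        # entropies across scales
    h = binned_entropy(attractor, bins)
    h_max = attractor.shape[1] * np.log2(bins)  # uniform-grid maximum,
    print(bins, h / h_max)                      # rescaled into [0, 1]
```

Sweeping `dim` and `delay` and watching how the rescaled entropy responds mirrors the abstract's suggested use of the measure for choosing embedding parameters: a deterministic series should yield values well below 1 at fine scales, while a random one stays near the maximum.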
