Let the Data Speak for Themselves

An estimation algorithm for stationary random data automatically selects a single time-series (TS) model for a given number of observations. The parameters of that model accurately represent the spectral density and the autocovariance function of the data. Increased computational speed now makes it feasible to compute hundreds of candidate TS models and to retain only one: the program applies an order-selection criterion to determine the best model type and model order from a large number of candidates. The selected model includes all statistically significant details present in the data, and no insignificant ones. The spectral density of a very high-order TS model equals the raw periodogram, and its autocorrelation function can equal the lagged-product (LP) estimate; the periodogram and the LP autocorrelation function are therefore themselves very high-order TS candidates. In practice, however, such high-order models are never selected because they contain many insignificant details. Automatic selection with the algorithm lets the data speak for themselves: a single model is chosen without user interaction. The program can be implemented in measurement instruments for maintenance or in radar, automatically detecting differences in signal properties.
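As a minimal sketch of the idea (not the authors' ARMAsel program, which also considers MA and ARMA candidates and finite-sample criteria), the following Python code fits autoregressive models of increasing order with the Levinson-Durbin recursion, selects the order with Akaike's AIC, and evaluates the spectral density of the selected model. All function names are illustrative.

```python
import numpy as np

def autocovariance(x, max_lag):
    """Biased sample autocovariances r[0..max_lag] of a 1-D signal."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

def levinson_durbin(r, p):
    """Fit AR(p) from autocovariances r; return [1, a1..ap] and residual variance."""
    a = np.zeros(p + 1)
    a[0] = 1.0
    err = r[0]
    for k in range(1, p + 1):
        # acc = r[k] + a1*r[k-1] + ... + a_{k-1}*r[1]
        acc = np.dot(a[:k], r[k:0:-1])
        refl = -acc / err            # reflection (partial correlation) coefficient
        a[1:k + 1] = a[1:k + 1] + refl * a[:k][::-1]
        err *= (1.0 - refl ** 2)     # residual variance after adding lag k
    return a, err

def select_ar_order(x, max_order):
    """Fit AR(0..max_order) and return the order, coefficients, and residual
    variance of the model that minimizes AIC = n*ln(err) + 2*p."""
    n = len(x)
    r = autocovariance(x, max_order)
    best = None
    for p in range(max_order + 1):
        a, err = levinson_durbin(r, p)
        aic = n * np.log(err) + 2 * p
        if best is None or aic < best[0]:
            best = (aic, p, a, err)
    return best[1], best[2], best[3]

def ar_spectrum(a, err, n_freq=256):
    """Spectral density err / |A(e^{-iw})|^2 of the AR model on [0, pi]."""
    w = np.linspace(0.0, np.pi, n_freq)
    A = np.exp(-1j * np.outer(w, np.arange(len(a)))) @ a
    return err / np.abs(A) ** 2
```

Because AIC penalizes every extra parameter, the periodogram-like candidate of maximal order loses to a parsimonious model whenever the higher-order details are statistically insignificant; the parametric spectrum of the winner is correspondingly much smoother than the raw periodogram.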
