Nonlinear Time Series: Nonparametric and Parametric Methods

Although Nonlinear Time Series is the only part of the title to appear on the spine of this new book by Fan and Yao, the word “nonparametric” in the subtitle really deserves top billing. There are hints here and there that the authors follow the viewpoint emphasized by Tong (1990), that there is a true underlying nonlinear dynamic law generating the time series data. Nonparametric methods can help uncover this dynamic law. But the book also works from a purely pragmatic standpoint: nonparametric estimation methods are useful in constructing effective forecasting algorithms, whatever the true dynamic law.

The standard time series modeling paradigm for building a forecast algorithm reduces an observed series to approximate stationarity through removal of trend and seasonal components, identifies the covariance structure and possible parametric models through the estimated autocovariance function of the resulting series, selects among competing estimated models through order-selection criteria (such as the Akaike information criterion) and residual diagnostics, and then computes optimal predictors for future values of the series. Autoregressive moving average (ARMA) models form a broad class of linear models that can approximate quite general autocovariance structures arbitrarily closely. Dependence outside the second-order moment structure, on the other hand, may require nonlinear models.

Nonparametric methods have a long history in time series analysis and appear throughout the standard modeling paradigm, particularly in estimation of trend and seasonal components for nonstationary time series and in estimation of spectral density functions and marginal probability densities for stationary time series. The authors hope to extend the use of nonparametric tools to the identification and estimation of nonlinear time series models. These models may have flexible nonparametric specifications, or the nonparametric analysis may suggest parametric nonlinear models. Nonparametric methods are also useful in constructing predictors for nonlinear processes.

The authors provide considerable background material on both time series and nonparametrics. Introductory chapters on characteristics of time series (Chap. 2) and ARMA modeling and forecasting (Chap. 3) borrow heavily from texts by Brockwell and Davis (1991, 2002) and the associated ITSM software. Subsequent chapters review important classes of parametric nonlinear time series models [threshold, generalized autoregressive conditional heteroscedastic (GARCH), and bilinear; Chap. 4] and introduce nonparametric methods through density estimation (Chap. 5) and spectral density estimation (Chap. 7). The subjects of main interest—those that really set this book apart—are concentrated in later chapters: smoothing with dependent data (Chap. 6), nonparametric time series models (Chap. 8), model validation (Chap. 9), and nonlinear prediction (Chap. 10).

The broad range of topics covered in this book makes for a large and awkward load. It is like coming home from the grocery store and trying to get all of the bags into the house in one trip: losing a few things on the way up the steps, crushing a few more while pushing through the door, and cracking one or two eggs when dropping the bags on the counter. Everything in the bags must be examined carefully for scratches, bruises, and breaks, and some items are lost altogether. This book has scratches scattered throughout, in the form of abundant errors and inconsistencies in the technical typesetting.
There are quite a few bruises as well: incorrect figure references, typos in formulas, garbled phrases, and terms used before they are defined. Some breaks are noticeable, especially the proof of Theorem 7.4. The maximum periodogram ordinate for an iid non-Gaussian sequence, suitably normalized, does indeed converge in distribution to the standard Gumbel distribution. The authors’ argument makes this result appear trivial, but the key approximation they use is incorrect. (See Davis and Mikosch 1999 for a valid proof, which relies on a Gaussian approximation technique for sums of independent random vectors.) Elsewhere, there are cracks in the exposition, with statements that are not quite right, like the claim on page 420 that point transformations of weakly stationary series are weakly stationary.

After taking stock of the damage, we might ask whether anything is missing. One omission is suggested by the authors’ comments on page 16 that “the validity of a parametric model for a large real data set over a long time span is always questionable,” and that this, among other factors, has “led to a rapid development of computationally intensive methodologies . . . that are designed to identify complicated data structures by exploring local lower-dimensional structures.” These comments seem to ignore the possibility of parametric hierarchical models, which in the time series context often take the form of parameter-driven generalized state-space models. Such models can capture a variety of nonstationary and nonlinear behaviors (e.g., Kitagawa 1987; Harvey 1989; Durbin and Koopman 2001). The hierarchical model specifies the dynamics of the observations given time-dependent “local parameters” (or states) and the dynamics of the local parameters given time-invariant “global parameters” (or hyperparameters). Hierarchical models can often successfully describe real datasets over long time spans by allowing the local parameters to change smoothly over time, which suggests that this parametric methodology has some relationship with nonparametric methods. Indeed, certain smoothing splines can be computed using the Kalman recursions, because such splines coincide with the optimal fixed-interval smoother for an integrated random walk observed with noise, a simple hierarchical time series model (see, e.g., Durbin and Koopman 2001 and references therein). Some mention of this relationship, and perhaps some discussion of the authors’ perspective on the use of nonparametric methods in the identification of hierarchical models, would have been nice to see.

Despite these problems, this book has much that is interesting and useful. The discussions of ergodicity in Section 2.1.4 and of mixing conditions in Section 2.6 are handy. The presentation of ARCH and GARCH models in Section 4.2 is a concise introduction to this vast literature from a statistics standpoint, and Chapter 5 gives a nice overview of nonparametric density estimation, with particular emphasis on results and references for density estimation with dependent data. In fact, most chapters end with extensive bibliographical notes. These will certainly be valuable resources for researchers, particularly in the later chapters that describe evolving areas. These later chapters, notably Chapters 6 and 8–10, constitute the book’s main contribution—topics not found in typical time series or nonparametrics texts. Chapter 6 covers smoothing with dependent data, in both the time domain (a standard topic in traditional time series analysis) and the state domain (not so standard).
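To make the state-domain idea concrete, here is a minimal sketch (not taken from the book) of a Nadaraya-Watson kernel estimate of the autoregression function m(x) = E(X_t | X_{t-1} = x) for a simulated nonlinear series. The simulated model, the Gaussian kernel, and the fixed bandwidth h = 0.3 are purely illustrative choices, not recommendations.

```python
# Illustrative sketch (not from the book): a Nadaraya-Watson estimate of the
# state-domain regression function m(x) = E(X_t | X_{t-1} = x), computed from
# the lagged pairs of a simulated nonlinear autoregression.
import numpy as np

rng = np.random.default_rng(0)

# Simulate an exponential-autoregression-style series:
#   X_t = 0.8 * X_{t-1} * exp(-X_{t-1}^2) + e_t,  e_t ~ N(0, 0.25).
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] * np.exp(-x[t - 1] ** 2) + rng.normal(scale=0.5)

# In the state domain, the lagged pairs (X_{t-1}, X_t) are treated as the
# design points and responses of an ordinary regression problem.
lagged, response = x[:-1], x[1:]

def nw_estimate(grid, design, resp, h):
    """Nadaraya-Watson estimate of E(X_t | X_{t-1} = x) at each grid point."""
    # Gaussian kernel weight for every (grid point, design point) pair.
    w = np.exp(-0.5 * ((grid[:, None] - design[None, :]) / h) ** 2)
    return (w @ resp) / w.sum(axis=1)

grid = np.linspace(lagged.min(), lagged.max(), 101)
m_hat = nw_estimate(grid, lagged, response, h=0.3)  # h = 0.3 is an arbitrary choice

# Compare with the conditional mean function used to generate the data.
m_true = 0.8 * grid * np.exp(-grid ** 2)
print("max abs error on grid:", np.max(np.abs(m_hat - m_true)))
```

The mechanics are simply those of kernel regression applied to the lagged pairs; the delicate questions with dependent data, such as how to choose the bandwidth and how the estimator behaves under mixing-type dependence, are the sort of issues the book's treatment is concerned with.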
Chapter 8, on nonparametric time series models, includes functional coefficient autoregressions, additive autoregressions, and index models, among others. Model validation, in Chapter 9, focuses on generalized likelihood ratios for testing against nonparametric alternatives (for which the nonparametric maximum likelihood estimator may not exist or may be too constrained to be of use). Chapter 10 covers nonlinear prediction, including point predictors, minimum-length prediction intervals, and predictive distributions.

Many of the interesting ideas presented in these chapters are highlighted through examples worked out in detail. These include both classic datasets (Canadian lynx and Wolf’s sunspots, naturally) and new examples. Chapter 8 provides some informative examples of financial applications (a technical trading rule applied to pound/dollar exchange rates and a value-at-risk analysis for the Standard and Poor’s 500 index). The material presented in these chapters includes techniques that anyone with a solid background in time series analysis could appreciate and could implement immediately with standard software (a small illustration appears at the end of this review). Even so, as the authors point out, there are plenty of open questions about these techniques. This makes the material in these chapters appealing to practitioners and researchers alike, although the practitioner would need to pick through lots of technical detail to extract the useful applied bits.

Nonlinear Time Series: Nonparametric and Parametric Methods is best suited as a stimulating research monograph. It is not a textbook (in particular, it has no exercises), but it has sufficient breadth that it could serve as the focus of a graduate reading course or as a source of supplemental teaching materials for an advanced time series class.
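As the small illustration promised above, here is a minimal sketch (again not taken from the book) of one-step nonlinear prediction in the spirit of Chapter 10: a kernel-weighted predictive distribution yielding a point predictor and a prediction interval. For brevity the interval is equal-tailed rather than minimum-length, and the simulated data, kernel, and bandwidth are again purely illustrative.

```python
# Illustrative sketch (not the book's algorithm): a kernel-weighted one-step
# predictive distribution for a stationary nonlinear series, giving a point
# predictor and an equal-tailed 90% prediction interval.
import numpy as np

rng = np.random.default_rng(1)

# Simulate a threshold-autoregression-style series as a stand-in for real data.
n = 1000
x = np.zeros(n)
for t in range(1, n):
    slope = 0.6 if x[t - 1] > 0 else -0.4
    x[t] = slope * x[t - 1] + rng.normal()

lagged, response = x[:-1], x[1:]
x_now = x[-1]   # current state, from which X_{n+1} is to be predicted
h = 0.4         # illustrative bandwidth, not a data-driven choice

# Kernel weights: past transitions whose starting value resembles x_now count more.
w = np.exp(-0.5 * ((lagged - x_now) / h) ** 2)
w /= w.sum()

# Point predictor: kernel-weighted mean of the observed next-step values
# (a Nadaraya-Watson estimate of E(X_{n+1} | X_n = x_now)).
point_forecast = np.sum(w * response)

# Predictive distribution: the weighted empirical distribution of the observed
# next-step values; its 5% and 95% weighted quantiles bracket a 90% interval.
order = np.argsort(response)
cdf = np.cumsum(w[order])
lower = response[order][np.searchsorted(cdf, 0.05)]
upper = response[order][np.searchsorted(cdf, 0.95)]

print(f"point forecast {point_forecast:.3f}, 90% interval ({lower:.3f}, {upper:.3f})")
```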