Monte Carlo Statistical Methods
The first six chapters, the sixth added since the first edition, cover mixing processes, density and regression estimation for discrete time processes, density and regression estimation for continuous time processes, and the local time density estimator. The final chapter, also added since the first edition and the only one not devoted to theoretical results, reviews some aspects of implementation and gives examples. The book opens with a synopsis that defines the object of study as the construction of time series alternatives to the usual Box-Jenkins SARIMA processes. It then highlights and summarizes the main ideas of the book, beginning with definitions of kernel density and regression estimators and concluding with a brief list of advantages of nonparametric over parametric time series methods: they are robust, deseasonalization is not necessary, and parametric convergence rates can, under some circumstances, be achieved.

Having provided that overview, the book proceeds in Chapter 1 to lay the theoretical groundwork for the analysis of a wide class of time series with a review of historical results for mixing processes. Results given include Berbee's and Bradley's coupling lemmas; results for covariances and joint densities, including Rio's, Davydov's, and Billingsley's inequalities; inequalities for partial sums, including Hoeffding's and Bernstein's; and limit theorems (laws of large numbers and a central limit theorem) for strongly mixing processes.

Chapters 2 and 3 cover the analysis of discrete time processes, Chapter 2 focusing on density estimation for sequences of correlated random variables and Chapter 3 on regression estimation and prediction. Topics include some specific kernels, optimal asymptotic quadratic error, uniform almost sure convergence for some kernels, asymptotic normality, and prediction for some stationary and nonstationary processes. These chapters are mainly review; although several results are from earlier papers by the author, they are not, by and large, new.

Chapters 4 and 5 consider estimation for continuous time processes and consist mainly of new results. Their development broadly parallels that of Chapters 2 and 3, with Chapter 4 devoted to density estimation and Chapter 5 covering regression estimation and prediction. Topics and results include optimal and superoptimal asymptotic quadratic error, including a minimax bound of Kutoyants (1997) and the minimaxity of intermediate rates; optimal and superoptimal uniform convergence rates; asymptotic normality; irregular and admissible sampling; and the convergence rates of continuous-time nonparametric predictors. Some conditions are given under which a nonparametric predictor attains a parametric convergence rate.

Chapter 6 explores the use of local time for unbiased density estimation given a continuous time sample and consists, apart from one result, of new results. Local time is defined, followed by two existence criteria, the first due to Geman and Horowitz (1973, 1980) and the second proved by the author. A density estimator based on local time is then defined and shown to be unbiased and consistent. Results on convergence rates are then given, followed by asymptotic normality, a functional law of the iterated logarithm, and parametric rates for pointwise and uniform convergence.
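For orientation, the kernel density and regression estimators that the synopsis introduces take the following standard forms (stated here in generic notation, which may differ from the book's own):

$$\hat f_n(x) = \frac{1}{n h_n}\sum_{i=1}^{n} K\!\left(\frac{x - X_i}{h_n}\right), \qquad \hat r_n(x) = \frac{\sum_{i=1}^{n} Y_i\, K\!\left((x - X_i)/h_n\right)}{\sum_{i=1}^{n} K\!\left((x - X_i)/h_n\right)},$$

where $K$ is a kernel (e.g., Gaussian) and $h_n$ is a bandwidth with $h_n \to 0$ and $n h_n \to \infty$.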
Chapter 7 is a brief summary of some practical aspects of nonparametric time series analysis. These fall into three areas: aspects of implementation, the comparison of nonparametric with parametric methods, and applied examples. The aspects of implementation addressed are variance stabilization via the Box-Cox transformation, methods for eliminating trend and seasonality, methods for choosing the kernel bandwidth, and choosing a suitable order for predicting a Markov process when the true order of the process is unknown. In comparing nonparametric and parametric methods of time series analysis, the book summarizes the results of Carbon and Delecroix (1993), who considered several simulated autoregressive moving average (ARMA) processes and some real business and engineering datasets, and the results of Rosa (1993), who considered ARMA models with and without generalized autoregressive conditional heteroscedasticity effects. An appendix gives 17 tables summarizing the results of the comparisons. Finally, some examples are given of applying nonparametric methods to financial and economic data. The value of this book is primarily in its theoretical development and, as such, it would be of more interest to researchers in statistical theory and
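As a concrete illustration of the implementation steps listed for Chapter 7 (variance stabilization, bandwidth choice, and nonparametric one-step prediction), here is a minimal Python sketch. It is not taken from the book: the simulated series, the function names, and the normal-reference bandwidth rule are assumptions made purely for illustration.

```python
# Minimal sketch: Box-Cox variance stabilization, a normal-reference kernel bandwidth,
# and a Nadaraya-Watson one-step-ahead predictor for a first-order Markov series.
# The simulated AR(1)-type data below is a stand-in, not one of the book's examples.
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform; y must be positive (lam = 0 gives the log transform)."""
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def normal_reference_bandwidth(x):
    """Normal reference rule-of-thumb bandwidth for a Gaussian kernel: 1.06 * sd * n^(-1/5)."""
    return 1.06 * np.std(x, ddof=1) * len(x) ** (-1 / 5)

def nw_predict(x_past, x_next, x0, h):
    """Nadaraya-Watson estimate of E[X_{t+1} | X_t = x0] from the pairs (x_past, x_next)."""
    u = (x0 - x_past) / h
    w = np.exp(-0.5 * u**2)              # Gaussian kernel weights
    return np.sum(w * x_next) / np.sum(w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500
    x = np.empty(n)
    x[0] = 1.0
    for t in range(1, n):                 # positive AR(1)-type series on the log scale
        x[t] = np.exp(0.7 * np.log(x[t - 1]) + 0.3 * rng.standard_normal())
    z = box_cox(x, lam=0.0)               # variance stabilization (log transform here)
    h = normal_reference_bandwidth(z[:-1])
    pred = nw_predict(z[:-1], z[1:], z[-1], h)   # one-step-ahead prediction, transformed scale
    print(f"bandwidth = {h:.3f}, predicted next (transformed) value = {pred:.3f}")
```

The Gaussian kernel and the rule-of-thumb bandwidth are common defaults chosen here for brevity; the bandwidth-selection and order-selection methods the chapter actually discusses are more data-driven than this sketch.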