level or as a useful reference book for the applied researcher or practitioner. Speaking as a practitioner, I found many parts of the book to be extremely useful. The main aims of the book are to give a brief coverage of the major MCMC methods for sampling from posterior distributions and then to focus extensively on methods for computing posterior quantities of interest. Topics such as marginal densities, ratios of normalizing constants, constrained parameter estimation, highest posterior density intervals, and methods for model comparison and selection are given thorough treatment in separate chapters of the book.

The book begins by describing several datasets that are used throughout the book as examples. These cover a variety of types of data and usefully support a number of diverse types of models throughout the book. A number of references point the interested reader to standard analyses of these data that can be used for comparison with the Bayesian techniques described later in the text.

Chapter 2 gives an overview of the main techniques used in MCMC sampling. The first section starts out with the Gibbs sampler. Detailed examples of its use for a bivariate normal model and then for a constrained regression problem, in which the model coefficients are constrained to be increasing, are thorough and comprehensive. Equally thorough discussions of the Metropolis–Hastings algorithm, the hit-and-run algorithm, and the multiple-try Metropolis algorithm follow. A nice feature of this section is the use of different algorithms on the same set of example data. The chapter continues with methods used to improve the efficiency of MCMC sampling. The section on grouping, collapsing, and reparameterization gives several different algorithms for dealing with ordinal response models. I found this section of the book to be the most useful because of the clear exposition of the algorithms and the detailed description of the steps taken in the MCMC sampling.
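For readers unfamiliar with the book's opening example, the Gibbs sampler for a bivariate normal is the standard illustration of the method: each coordinate is redrawn in turn from its Gaussian full conditional. The sketch below is my own minimal version, not the book's code; the function name and the choice of a standard bivariate normal with correlation `rho` are illustrative assumptions.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=10000, burn_in=1000, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    The full conditionals are both Gaussian:
        x | y ~ N(rho * y, 1 - rho**2)
        y | x ~ N(rho * x, 1 - rho**2)
    """
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    sd = np.sqrt(1.0 - rho**2)  # conditional standard deviation
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        # Alternately update each coordinate from its full conditional.
        x = rng.normal(rho * y, sd)
        y = rng.normal(rho * x, sd)
        draws[t] = (x, y)
    return draws[burn_in:]

samples = gibbs_bivariate_normal(rho=0.8)
print(samples.mean(axis=0))          # should be near [0, 0]
print(np.corrcoef(samples.T)[0, 1])  # should be near 0.8
```

The same alternating-update pattern carries over to the book's constrained regression example, where each conditional draw is additionally truncated to keep the coefficients increasing.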
The chapter goes on to discuss acceleration, a random direction importance sampler that is a step toward a "black box" sampling algorithm, and finally, some material on convergence diagnostics.

Chapter 3 begins the main part of the text, covering basic Monte Carlo methods for estimating posterior quantities. Chapter 4 covers methods for estimating posterior marginal densities and includes several detailed proofs. Chapter 5 gives an overview of methods used for estimating ratios of normalizing constants. Importance sampling, path sampling, bridge sampling, and variations of each are all discussed. Extensions to problems of differing dimensions are also covered. A detailed application of these techniques to choosing between various link functions in a generalized linear model is presented. Chapter 6 extends the concepts explored in Chapter 5 to problems that involve constraints on parameters. This chapter gives a brief overview of the benefits of calculating normalizing constants for constrained parameter problems and then launches into two detailed examples. Chapter 7 proceeds similarly, covering the calculation of Bayesian credible and HPD intervals.

The remaining chapters treat a variety of special topics in MCMC modeling. Chapter 8 covers approaches for comparing nonnested models, Chapter 9 gives an extensive treatment of Bayesian variable selection techniques, and Chapter 10 gives a sampling of other topics. The book ends with over 280 references.

Every chapter ends with several exercises. Unfortunately, many of them look as if they would take an entire semester to complete. Typically they involve constructing several alternative algorithms for analyzing the data and comparing them. Given that excellent programming proficiency is necessary, not to mention the time required to run the analyses for an adequate number of iterations, it seems a student would have to devote full time to this single course to have any hope of completing many of the exercises.
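The simplest of the Chapter 5 techniques mentioned above, the importance-sampling estimator of a ratio of normalizing constants, rests on the identity E_{p2}[q1(x)/q2(x)] = c1/c2, where p2 = q2/c2 is the normalized second density. Here is a minimal sketch of that identity (my own illustration, not taken from the book), checked on two unnormalized Gaussian kernels where the true ratio is known in closed form:

```python
import numpy as np

def ratio_of_normalizing_constants(log_q1, log_q2, draws_from_p2):
    """Importance-sampling estimate of c1/c2, where c_i = integral of q_i.

    Uses the identity E_{p2}[q1(x)/q2(x)] = c1/c2, with draws taken from
    the normalized density p2 = q2/c2.
    """
    log_w = log_q1(draws_from_p2) - log_q2(draws_from_p2)
    return np.exp(log_w).mean()

# Toy check: q1 = exp(-x^2/2) has c1 = sqrt(2*pi), while
# q2 = exp(-x^2/(2*sigma^2)) has c2 = sigma*sqrt(2*pi),
# so the true ratio is c1/c2 = 1/sigma.
rng = np.random.default_rng(0)
sigma = 2.0
x = rng.normal(0.0, sigma, size=100000)  # draws from p2 = N(0, sigma^2)
est = ratio_of_normalizing_constants(
    lambda t: -0.5 * t**2,                # log q1
    lambda t: -0.5 * t**2 / sigma**2,     # log q2
    x,
)
print(est)  # should be near 1/sigma = 0.5
```

Bridge and path sampling, also covered in Chapter 5, refine this estimator for cases where q1 and q2 overlap poorly and the simple importance weights become unstable.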
The book fills an important niche by pulling together a wide variety of material that is usually available only in diverse journals and technical reports. The topics are given a thorough mathematical treatment, and the detailed examples make it possible for someone without rigorous mathematical training to implement many of the described algorithms. The step-by-step logic behind each method and the wide variety of topics covered make this a valuable reference book for anyone who regularly works with Bayesian models.