Monte Carlo Strategies in Scientific Computing
The strength of this book is in bringing together advanced Monte Carlo (MC) methods developed in many disciplines. This intent is clear from the outset: "Many researchers in different scientific areas have contributed to its development … communications among researchers in these fields are very limited. It is therefore desirable to develop a relatively general framework in which scientists in every field … can compare their Monte Carlo techniques and learn from one another." Throughout the book are examples of techniques invented, or reinvented, in different fields that may be applied elsewhere. This is occasionally embarrassing to those of us who are statisticians. Consider this statement: "Using the HMC to solve statistical inference problems was first introduced by Neal (1996). This effort was only 10 years behind that in physics and theoretical chemistry. In contrast, statisticians were 40 years late in using the Metropolis algorithm." The book serves "three audiences: researchers specializing in the study of Monte Carlo algorithms; scientists who are interested in using advanced Monte Carlo techniques; and graduate students … second-year graduate-level course on Monte Carlo methods." Chapter 1 gives an overview and a variety of applications, including the Ising model, molecular structure simulation, bioinformatics, target tracking, hypothesis testing for astronomical observations, Bayesian inference for multilevel models, and missing-data problems. Chapter 2 covers basic MC methods and begins the treatment of sequential methods, including exact sampling for chain-structured models and sequential importance sampling with rejection control, with applications to solving a linear system, missing data, and population genetics. Chapter 3 expands on sequential methods. The common thread is that each observation from a multivariate distribution is generated sequentially from approximate conditional distributions.
The ratio between the joint density (of the dimensions generated so far) and the approximation is an importance sampling weight and is a martingale; in high-dimensional problems the weights tend to degenerate, with most observations having weights near 0 and a few having high weight. Remedies include a variety of pruning and enrichment (also known as Russian roulette and splitting) and resampling techniques. Applications include growing a polymer, missing data, nonlinear filtering, and (in Chap. 4) molecular simulation, population genetics, motif patterns in DNA sequences, counting 0–1 tables with fixed margins, parametric Bayes analysis, approximating permanents, target tracking, and digital communications. Chapter 5 introduces Markov chain Monte Carlo (MCMC) methods, with Metropolis–Hastings and a number of generalizations, including multipoint, reversible jumping, and dynamic weighting rules. Chapters 6–8 treat MCMC methods based on the Gibbs sampler, including data augmentation, cluster algorithms, partial resampling, the slice sampler, Metropolized Gibbs, hit-and-run, random-ray, collapsing and grouping, the Swendsen–Wang algorithm as data augmentation, transformation groups, and generalized Gibbs. Applications include Gaussian random fields, texture synthesis, Bayesian probit regression, stochastic differential equations, hierarchical Bayes, finding motifs in protein or DNA sequences, Ising and Potts models, inference with multivariate t distributions, and parameter expansion for data augmentation. Chapter 9 considers hybrid MC and its connection to molecular dynamics algorithms used in structural biology and theoretical chemistry. Also covered are strategies for improving efficiency, including surrogate transition, window, and multipoint methods, with applications in Bayesian analysis and stochastic volatility.
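The sequential weighting idea described above can be illustrated in the polymer-growth setting: the classic Rosenbluth scheme grows a self-avoiding walk step by step, and the product of the numbers of allowed continuations is exactly the importance sampling weight whose average estimates the number of such walks. This is a minimal sketch, not code from the book; the function names are ours.

```python
import random

def rosenbluth_walk(n, rng):
    """Grow one self-avoiding walk of length n on the square lattice,
    choosing uniformly among the unvisited neighbors at each step.
    Returns the importance weight: the product over steps of the number
    of allowed continuations (0 if the walk traps itself)."""
    visited = {(0, 0)}
    x, y = 0, 0
    weight = 1.0
    for _ in range(n):
        nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        free = [p for p in nbrs if p not in visited]
        if not free:
            return 0.0          # trapped walk gets weight 0
        weight *= len(free)     # ratio of target to proposal at this step
        x, y = rng.choice(free)
        visited.add((x, y))
    return weight

def estimate_saw_count(n, samples, seed=0):
    """Average weight estimates c_n, the number of length-n self-avoiding walks."""
    rng = random.Random(seed)
    return sum(rosenbluth_walk(n, rng) for _ in range(samples)) / samples
```

For short walks the estimate can be checked against exact counts (e.g., there are 36 self-avoiding walks of length 3 and 100 of length 4); as n grows, the weight degeneracy discussed above sets in, which is what motivates the pruning, enrichment, and resampling remedies.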
Chapters 10 and 11 discuss recent methods for efficient MC sampling, including temperature-based methods (simulated tempering, parallel tempering, and simulated annealing), reweighting methods (umbrella sampling and multicanonical sampling), and evolution-based methods (adaptive direction sampling and conjugate gradient MC). Chapters 12 and 13 cover theory for Markov chains and their convergence rates. The book focuses on the relatively more difficult MC applications in which "directly generating independent samples from the target distribution is not feasible." It omits discussion of some relatively simple MC techniques that are valuable in applications where direct generation is feasible and that could be adapted for other applications, e.g., stratified sampling (the "stratified sampling" technique discussed here is unusual and of limited value), post-stratification, and defensive mixture designs in importance sampling (Hesterberg 1995). The treatment of importance sampling (IS) could be improved. The book describes the original motivation for IS (focusing attention on "important" regions) and then indicates:
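The defensive mixture design cited above (Hesterberg 1995) guards importance sampling against unbounded weights by mixing a little of the target itself into the proposal. The following toy sketch, our own illustration rather than anything from the book or the paper, estimates the N(0,1) tail probability P(X > 3) by sampling from the mixture q = alpha * N(0,1) + (1 - alpha) * N(3,1), which bounds the weights p/q by 1/alpha.

```python
import math
import random

def normal_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def defensive_is_tail_prob(threshold=3.0, alpha=0.1, samples=50000, seed=1):
    """Estimate P(X > threshold) for X ~ N(0,1) by importance sampling
    from the defensive mixture q = alpha*N(0,1) + (1-alpha)*N(threshold,1).
    Mixing in the target component bounds the weights p/q by 1/alpha."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        if rng.random() < alpha:
            x = rng.gauss(0.0, 1.0)        # draw from the target component
        else:
            x = rng.gauss(threshold, 1.0)  # draw from the "important" region
        p = normal_pdf(x)
        q = alpha * p + (1.0 - alpha) * normal_pdf(x, threshold)
        total += (x > threshold) * p / q   # weighted indicator
    return total / samples
```

The exact value is Phi(-3) ≈ 0.00135, and the estimator recovers it closely with modest sample sizes; without the defensive component, a poorly matched proposal can leave the weights with infinite variance.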
[1] Hesterberg, T. (1992), "Importance Sampling for Bayesian Estimation."
[2] Hesterberg, T. (1995), "Weighted Average Importance Sampling and Defensive Mixture Distributions."