Rates of Convergence for Sequential Monte Carlo Optimization Methods

Sequential Monte Carlo methods of the stochastic approximation (SA) type, with and without constraints, are discussed. The rates of convergence are derived, and the quantities upon which the rates depend are identified. Let $\{ {X_n } \}$ denote the SA sequence and define $U_n = (n + 1)^\beta X_n $ for a suitable $\beta > 0$. The $\{ {U_n } \}$ are interpolated into a natural continuous-time process, and weak convergence theory is applied to develop the properties of the tails of the sequence. The technique has a number of advantages over past approaches, which are discussed in the paper. It gives more insight than other approaches, is apparently more readily generalizable, and suggests ways of improving the convergence. The particular "dynamical" nature of the approach allows one to say more about the "tail" process, and to do more "decision" (or "control") analysis with it.
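A minimal numerical sketch of the setup described above, assuming a classical Robbins-Monro SA recursion $X_{n+1} = X_n - a_n Y_n$ with step sizes $a_n = 1/(n+1)$; the target $\theta$, the noise model, and the choice $\beta = 1/2$ are illustrative assumptions, not details taken from the paper, and the sequence is centered at $\theta$ before scaling:

```python
import random

def robbins_monro(theta=2.0, beta=0.5, steps=20000, seed=0):
    """Illustrative SA run: seek the root of f(x) = x - theta from noisy
    observations, and form the scaled tail sequence
    U_n = (n + 1)**beta * (X_n - theta).
    (theta, the Gaussian noise, and beta = 1/2 are assumptions for the sketch.)"""
    rng = random.Random(seed)
    x = 0.0  # X_0
    us = []
    for n in range(steps):
        a_n = 1.0 / (n + 1)                    # a_n -> 0, sum a_n = infinity
        y = (x - theta) + rng.gauss(0.0, 1.0)  # noisy observation of f(X_n)
        x = x - a_n * y                        # SA update X_{n+1} = X_n - a_n Y_n
        us.append((n + 1) ** beta * (x - theta))
    return x, us

x_final, us = robbins_monro()
print(abs(x_final - 2.0) < 0.2)  # X_n settles near theta
```

Under the usual conditions, $X_n \to \theta$ while the scaled iterates $U_n$ remain of constant order, which is what makes $\{U_n\}$ (interpolated into continuous time) the right object for studying the tail behavior and the rate of convergence.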