Asymptotic behavior of stochastic approximation and large deviations

The theory of large deviations is applied to the study of the asymptotic properties of the stochastic approximation algorithms (1.1) and (1.2). The method provides a useful alternative to the currently used technique of obtaining rate-of-convergence results by studying the sequence $\{(X_n - \theta)/\sqrt{a_n}\}$ (for (1.1)), where $\theta$ is a 'stable' point of the algorithm. Let $G$ be a bounded neighborhood of $\theta$ that lies in the domain of attraction of $\theta$ for the 'limit ODE'. The process $x^n(\cdot)$ is defined as a 'natural interpolation' of $\{X_j,\, j \ge n\}$ with $x^n(0) = X_n$ and interpolation intervals $\{a_j,\, j \ge n\}$. Define $\tau_G^n = \min\{t : x^n(t) \notin G\}$. Then it is shown (among other things) that $P_x\{\tau_G^n \le T\} \sim \exp(-n^q V)$, where $q$ depends on $\{a_n, c_n\}$, and $V$ depends on $b(\cdot)$, $\operatorname{cov}\xi_n$, and $G$. Such estimates imply that the asymptotic behavior is much better than suggested by the 'local linearization methods', and they yield much new insight into the asymptotic behavior. The technique is applicable to related problems in the asymptotic analysis of recursive algorithms, and it requires weaker conditions on the dynamics than the 'linearization methods' do. The necessary basic background is provided, and the optimal control problems associated with obtaining the $V$ above are derived.
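
To make the objects above concrete, the following minimal sketch simulates a Robbins-Monro-type recursion of the form $X_{n+1} = X_n + a_n\,(b(X_n) + \xi_n)$ and Monte Carlo estimates the exit probability $P_x\{\tau_G^n \le T\}$ for the interpolated process $x^n(\cdot)$. The drift $b$, the stable point $\theta = 0$, the set $G$, the step sizes $a_n = 1/n$, and all constants are illustrative assumptions, not taken from the paper; the point of the sketch is only that the estimated escape probability should decay rapidly in the starting index $n$, in the spirit of the $\exp(-n^q V)$ estimate.

```python
# Minimal numerical sketch (illustrative assumptions, not the paper's method):
# simulate X_{j+1} = X_j + a_j (b(X_j) + xi_j) with a_j = 1/j, and estimate
# P{the interpolated process x^n(.) leaves G before interpolated time T}.
import numpy as np

rng = np.random.default_rng(0)

def b(x):
    return -x  # assumed linear drift toward the stable point theta = 0

def escape_probability(n_start, T=1.0, noise_std=1.0, radius=0.5, trials=2000):
    """Monte Carlo estimate of P_x{tau_G^n <= T} for G = (-radius, radius),
    with the interpolated time built from the intervals {a_j, j >= n}."""
    escapes = 0
    for _ in range(trials):
        x, t, j = 0.0, 0.0, n_start          # start at theta at step index n
        while t < T:
            a_j = 1.0 / j                     # step-size sequence a_j = 1/j
            x += a_j * (b(x) + noise_std * rng.standard_normal())
            t += a_j                          # interpolation interval a_j
            j += 1
            if abs(x) >= radius:              # exit from G before time T
                escapes += 1
                break
    return escapes / trials

# Escape probabilities should fall off sharply as the starting index n grows,
# roughly like exp(-n^q V) (q and V depending on the step sizes, b, and cov xi).
for n in (10, 100, 1000):
    print(n, escape_probability(n))
```

In this toy setting the local-linearization view would only describe the Gaussian fluctuations of $(X_n - \theta)/\sqrt{a_n}$; the Monte Carlo exit frequencies instead illustrate the rare-event behavior that the large-deviations estimate quantifies.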