There is a large literature on the rate of convergence problem for general unconstrained stochastic approximations. Typically, one centers the iterate $\theta_n$ about the limit point $\bar\theta$ and then normalizes by dividing by the square root of the step size $\epsilon_n$. One then proves some type of convergence in distribution or weak convergence of $U_n$, the centered and normalized iterate. For example, one proves that the interpolated process formed by the $U_n$ converges weakly to a stationary Gaussian diffusion, and the variance of the stationary measure is taken to be a measure of the rate of convergence. See the references in [A. Benveniste, M. Métivier, and P. Priouret, Adaptive Algorithms and Stochastic Approximations, Springer-Verlag, Berlin, New York, 1990; L. Gerencsér, SIAM J. Control Optim., 30 (1992), pp. 1200--1227; H. J. Kushner and D. S. Clark, Stochastic Approximation Methods for Constrained and Unconstrained Systems, Springer-Verlag, Berlin, New York, 1978; H. J. Kushner and G. Yin, Stochastic Approximation Algorithms and Applications, Springer-Verlag, Berlin, New York, 1997; M. T. Wasan, Stochastic Approximation, Cambridge University Press, Cambridge, UK, 1969] for algorithms where the step size either goes to zero or is small and constant. Large deviations provide an alternative approach to the rate of convergence problem [P. Dupuis and H. J. Kushner, SIAM J. Control Optim., 23 (1985), pp. 675--696; P. Dupuis and H. J. Kushner, SIAM J. Control Optim., 27 (1989), pp. 1108--1135; P. Dupuis and H. J. Kushner, Probab. Theory Related Fields, 75 (1987), pp. 223--244; A. P. Korostelev, Stochastic Recurrent Processes, Nauka, Moscow, 1984; H. J. Kushner and G. Yin, Stochastic Approximation Algorithms and Applications, Springer-Verlag, Berlin, New York, 1997]. When the iterates of the algorithm are constrained to lie in some bounded set, the limit point is frequently on the boundary. With the exception of the large deviations results [P. Dupuis and H. J. Kushner, SIAM J. Control Optim., 23 (1985), pp. 675--696; P. Dupuis and H. J. Kushner, Probab. Theory Related Fields, 75 (1987), pp. 223--244], the rate of convergence literature is essentially confined to the case where the limit point is not on a constraint boundary.
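Written out in the notation just introduced (this is only a restatement of the standard setup described above, not a display taken from the paper), the centered and normalized iterate is
\[
U_n \;=\; \frac{\theta_n - \bar\theta}{\sqrt{\epsilon_n}},
\]
and the rate of convergence statements concern the weak limit of the piecewise interpolation of the sequence $\{U_n\}$, with the interpolation intervals determined by the step sizes $\epsilon_n$.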
When the limit point is on the boundary of the constraint set, the usual steps are hard to carry out. In particular, the stability methods used to prove tightness of the normalized iterates do not carry over in general, and one must still prove tightness of the normalized process and characterize the limit process.
This paper develops the necessary techniques and shows that the stationary Gaussian diffusion is replaced by an appropriate stationary reflected linear diffusion, whose variance plays the same role as a measure of the rate of convergence. An application to constrained function minimization under the inequality constraints $q^i(x)\le 0$, $i\le p$, is given, where both the objective function and the constraints are observed in the presence of noise. The iteration is on both the basic state variable and a Lagrange multiplier, which is constrained to be nonnegative. If a limit multiplier value for an active constraint is zero, then the classical method for computing the rate cannot be used, but (under appropriate conditions) this case is covered as a special case of our results. Rate of convergence results are important because, among other reasons, they immediately yield the advantages of iterate averaging methods, as noted in [H. J. Kushner and G. Yin, Stochastic Approximation Algorithms and Applications, Springer-Verlag, Berlin, New York, 1997].
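To illustrate the type of algorithm covered by the application, a schematic Lagrangian-type iteration (the symbols $x_n$, $\lambda_n^i$, $Y_n$, $\psi_n^i$, $\hat q_n^i$, and $\Pi_H$ are illustrative here and are not taken from the paper) might be written as
\[
x_{n+1} \;=\; \Pi_H\Big[x_n - \epsilon_n\Big(Y_n + \sum_{i\le p}\lambda_n^i\,\psi_n^i\Big)\Big],
\qquad
\lambda_{n+1}^i \;=\; \max\big\{0,\;\lambda_n^i + \epsilon_n\,\hat q_n^i\big\},
\]
where $Y_n$ and $\psi_n^i$ are noise-corrupted estimates of the gradients of the objective function and of $q^i$ at $x_n$, $\hat q_n^i$ is a noisy observation of $q^i(x_n)$, and $\Pi_H$ projects the state variable onto its constraint set. The truncation at zero in the multiplier update enforces nonnegativity; when a limit multiplier value for an active constraint is zero, the limit point sits on this boundary, which is exactly the situation where a reflected diffusion limit replaces the classical Gaussian diffusion.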
[1] H. J. Kushner and P. Dupuis, Numerical Methods for Stochastic Control Problems in Continuous Time, Springer-Verlag, New York, 2000.
[2] N. Dunford and J. T. Schwartz, Linear Operators, Part I: General Theory, Interscience, New York, 1960.
[3] M. T. Wasan, Stochastic Approximation, Cambridge University Press, Cambridge, UK, 1969.
[4] P. Dupuis and H. J. Kushner, Stochastic approximations via large deviations: Asymptotic properties, SIAM J. Control Optim., 23 (1985), pp. 675--696.
[5] H. J. Kushner and D. S. Clark, Stochastic Approximation Methods for Constrained and Unconstrained Systems, Applied Mathematical Sciences 26, Springer-Verlag, Berlin, Heidelberg, New York, 1978.
[6] H. J. Kushner, Approximation and Weak Convergence Methods for Random Processes, MIT Press, Cambridge, MA, 1984.
[7] H. J. Kushner and G. Yin, Stochastic Approximation Algorithms and Applications, Applications of Mathematics, Springer-Verlag, Berlin, New York, 1997.
[8] A. Benveniste, M. Métivier, and P. Priouret, Adaptive Algorithms and Stochastic Approximations, Applications of Mathematics, Springer-Verlag, Berlin, New York, 1990.
[9] J. M. Harrison and R. J. Williams, Brownian models of open queueing networks with homogeneous customer populations, Stochastics, 1987.
[10] P. Dupuis and H. Ishii, On Lipschitz continuity of the solution mapping to the Skorokhod problem, with applications, Stochastics Stochastics Rep., 1991.
[11] P. Dupuis and R. J. Williams, Lyapunov functions for semimartingale reflecting Brownian motions, Ann. Probab., 1994.
[12] L. Gerencsér, Rate of convergence of recursive estimators, SIAM J. Control Optim., 30 (1992), pp. 1200--1227.
[13] G. Blankenship and G. C. Papanicolaou, Stability and control of stochastic systems with wide-band noise disturbances. I, SIAM J. Appl. Math., 1978.
[14] H. J. Kushner and E. Sanvicente, Stochastic approximation of constrained systems with system and constraint noise, Automatica, 1975.
[15] P. Dupuis and H. J. Kushner, Large deviations estimates for systems with small noise effects, and applications to stochastic systems theory, SIAM J. Control Optim., 1986.
[16] E. B. Dynkin, Functionals of Markov processes, 1965.
[17] P. Dupuis and H. J. Kushner, Asymptotic behavior of constrained stochastic approximations via the theory of large deviations, Probab. Theory Related Fields, 75 (1987), pp. 223--244.
[18] J. L. Doob, Asymptotic properties of Markoff transition probabilities, 1948.
[19] P. Dupuis and H. J. Kushner, Stochastic approximation and large deviations: Upper bounds and w.p.1 convergence, SIAM J. Control Optim., 27 (1989), pp. 1108--1135.
[20] G. Papanicolaou et al., Stability and control of stochastic systems with wide-band noise disturbances, 1977.