We apply reverse accumulation to obtain automatic gradients and error estimates of functions whose computation includes a convergent iteration of the form y = Φ(y, u), where y and u are vectors. We suggest an implementation approach that allows this to be done by a fairly routine extension of existing reverse accumulation code. We show how to re-use the computational graph for the fixed point constructor Φ so as to set explicit stopping criteria for the iterations, based on the gradient accuracy required. Our construction allows the gradient vector to be obtained to the same order of accuracy as the objective function values (which is in general the best we can hope to achieve), and at the same order of computational cost (which does not explicitly depend upon the number of independent variables). The technique can be applied to functions which contain several iterative constructions, either serially or nested.
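The two-phase structure described above can be illustrated with a minimal scalar sketch: first iterate y = Φ(y, u) to convergence, then run an adjoint fixed point iteration w = f_y + Φ_y·w and recover the gradient as Φ_u·w. The particular constructor Φ (the Babylonian square-root iteration), the objective f(y) = y, and the analytically coded partials are illustrative assumptions only; in the construction of the paper these partials would be obtained by reverse accumulation over the computational graph of Φ.

```python
def phi(y, u):
    # Illustrative fixed point constructor: Babylonian iteration,
    # with fixed point y* = sqrt(u).  (Assumption, not the paper's example.)
    return 0.5 * (y + u / y)

def fixed_point_gradient(u, y0=1.0, tol=1e-12, max_iter=200):
    # Phase 1: forward iteration of y = phi(y, u) to convergence.
    y = y0
    for _ in range(max_iter):
        y_new = phi(y, u)
        done = abs(y_new - y) < tol
        y = y_new
        if done:
            break

    # Partial derivatives of phi at the converged point, coded analytically
    # here; reverse accumulation would extract them from the graph of phi.
    phi_y = 0.5 * (1.0 - u / (y * y))
    phi_u = 0.5 / y

    # Phase 2: adjoint fixed point iteration w = f_y + phi_y * w,
    # for the objective f(y) = y (so f_y = 1).  The same tolerance is
    # reused as a stand-in for a gradient-accuracy stopping criterion.
    w = 0.0
    for _ in range(max_iter):
        w_new = 1.0 + phi_y * w
        done = abs(w_new - w) < tol
        w = w_new
        if done:
            break

    # Gradient of f(y*(u)) with respect to u.
    return y, phi_u * w

y_star, grad = fixed_point_gradient(4.0)
# y* = sqrt(4) = 2, and d sqrt(u)/du at u = 4 is 1/(2*sqrt(4)) = 0.25
```

Note that the cost of the adjoint phase is independent of the number of components of u, which is the sense in which the gradient cost does not explicitly depend on the number of independent variables.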