solution. The functions that require zeroing are real functions of real variables, and it will be assumed that they are continuous and differentiable with respect to these variables. In many practical examples they are extremely complicated and hence laborious to compute, and this fact has two important immediate consequences. The first is that it is impracticable to compute any required derivative by evaluating its algebraic expression; if derivatives are needed they must be obtained by differencing. The second is that during any iterative solution process the bulk of the computing time will be spent in evaluating the functions. Thus, the most efficient process will tend to be the one that requires the smallest number of function evaluations. This paper discusses certain modifications to Newton's method designed to reduce the number of function evaluations required. Results of various numerical experiments are given, and conditions under which the modified versions are superior to the original are tentatively suggested.
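To make the cost argument concrete, the following is a minimal sketch (not the paper's modified methods) of the baseline the text describes: Newton's method applied to a system of nonlinear equations, with the Jacobian obtained by forward differencing rather than from algebraic expressions. The function names, step size, and the two-equation example system are illustrative assumptions; the point of the `nfev` counter is that each iteration of this baseline costs one evaluation for the residual plus one extra evaluation per variable for the differenced Jacobian, which is precisely the expense the proposed modifications aim to reduce.

```python
import numpy as np

def fd_jacobian(f, x, fx, h=1e-6):
    """Approximate the Jacobian of f at x by forward differences.
    fx = f(x) is passed in so it is not recomputed; each column of
    the Jacobian then costs one extra evaluation of f."""
    n = len(x)
    J = np.empty((len(fx), n))
    for j in range(n):
        xp = np.array(x, dtype=float)
        xp[j] += h
        J[:, j] = (np.asarray(f(xp), dtype=float) - fx) / h
    return J

def newton_fd(f, x0, tol=1e-10, max_iter=50):
    """Newton's method with a finite-difference Jacobian.
    Returns the approximate root and the total number of
    function evaluations used (the cost measure in the text)."""
    x = np.array(x0, dtype=float)
    nfev = 0
    for _ in range(max_iter):
        fx = np.asarray(f(x), dtype=float)
        nfev += 1
        if np.linalg.norm(fx) < tol:
            break
        J = fd_jacobian(f, x, fx)
        nfev += len(x)  # one extra f-evaluation per variable
        x = x - np.linalg.solve(J, fx)
    return x, nfev

# Illustrative system: x0^2 + x1^2 = 2 and x0 = x1, with a root at (1, 1).
f = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
root, nfev = newton_fd(f, [2.0, 0.5])
```

For an n-variable system this baseline spends n + 1 evaluations per iteration, so methods that reuse earlier function values to update the Jacobian approximation can cut the dominant cost substantially.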