On search directions for minimization algorithms
Some examples are given of differentiable functions of three variables with the property that, when they are minimized by the algorithm that searches along the coordinate directions in sequence, the search path tends to a closed loop on which the gradient of the objective function is bounded away from zero. We discuss the relevance of these examples to the problem of proving general convergence theorems for minimization algorithms that use search directions.
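To make the scheme under discussion concrete, the following is a minimal sketch of cyclic coordinate search: minimize along each coordinate axis in turn by an exact one-dimensional line search. This is only an illustration of the algorithm the abstract refers to, not a reproduction of the paper's three-variable counterexamples; the test function and the ternary-search line minimizer are assumptions for the sketch.

```python
def line_min(f, lo=-10.0, hi=10.0, tol=1e-8):
    """Minimize a unimodal 1-D function on [lo, hi] by ternary search."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def cyclic_coordinate_search(f, x0, sweeps=5):
    """Search along the coordinate directions e1, ..., en in sequence,
    repeating the cycle `sweeps` times."""
    x = list(x0)
    for _ in range(sweeps):
        for i in range(len(x)):
            # 1-D restriction of f along the i-th coordinate through x.
            def g(t, i=i):
                y = x[:]
                y[i] = t
                return f(y)
            x[i] = line_min(g)
    return x

# On a separable convex quadratic the iterates converge to the minimizer
# (1, -2, 0); the paper's point is that for some differentiable functions
# the same search path instead tends to a closed loop away from any
# stationary point.
quad = lambda v: (v[0] - 1.0)**2 + 2.0 * (v[1] + 2.0)**2 + v[2]**2
x = cyclic_coordinate_search(quad, [5.0, 5.0, 5.0])
```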