In the optimization of complex systems where, at best, one can only evaluate the function being optimized, often via a simulation or physical experimental process, Response Surface Methodology (RSM) is frequently the method of choice. However, RSM almost invariably employs a first-order steepest descent method, presumably because of the prohibitive cost of second-order designs, except for the canonical analysis conducted at termination. In this paper, we suggest deflected (conjugate) gradient approaches that can be gainfully employed to improve the performance of RSM. These methods require the same effort per iteration and only first-order designs. However, instead of discarding previously generated information, they reuse these computations to build accumulated knowledge of the second-order curvature of the function being optimized, thereby accelerating convergence by circumventing the characteristic zigzagging of steepest descent methods. The proposed scheme is illustrated on various test examples from the literature whose optimal solutions are known and which exhibit the potential of the recommended methodology.
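To make the contrast with steepest descent concrete, here is a minimal sketch (our illustration, not the paper's implementation) comparing a plain steepest descent iteration against a Polak-Ribière deflected-gradient update on an ill-conditioned quadratic; the test function, exact line search, and iteration budget are assumptions chosen purely for demonstration.

```python
import numpy as np

def f(x):
    # Ill-conditioned quadratic: pure steepest descent zigzags on such surfaces.
    return 10.0 * x[0] ** 2 + x[1] ** 2

H = np.diag([20.0, 2.0])  # Hessian of f, used only for the exact line search

def grad(x):
    return H @ x

def minimize(x0, deflect, iters=200, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # first direction: steepest descent
    for k in range(iters):
        alpha = -(g @ d) / (d @ H @ d)      # exact line search on a quadratic
        x = x + alpha * d
        g_new = grad(x)
        if np.linalg.norm(g_new) < tol:
            return x, k + 1
        if deflect:
            # Polak-Ribiere deflection: retaining a multiple of the previous
            # direction makes successive directions conjugate, reusing the
            # accumulated curvature information instead of discarding it.
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))
            d = -g_new + beta * d
        else:
            d = -g_new                      # plain steepest descent
        g = g_new
    return x, iters

for deflect in (False, True):
    x, n = minimize([1.0, 1.0], deflect)
    print(f"deflected={deflect}: {n} iterations, f(x)={f(x):.2e}")
```

On this two-dimensional quadratic the deflected method terminates in two iterations, whereas steepest descent spends on the order of a hundred iterations zigzagging along the narrow valley, despite both performing the same work per iteration.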
This paper seeks to improve the search techniques currently used in standard Response Surface Methodology (RSM) algorithms. RSM is a collection of mathematical and statistical techniques for experimental optimization. This work presents a novel RSM algorithm that incorporates gradient deflection methods, augmented with appropriate restarting criteria, in place of the path of steepest descent as the only search direction. To compare the new RSM algorithm against standard existing RSM techniques, a set of standard test functions is used, both with and without random perturbations. Computational results exhibit the improvements achieved by the proposed algorithm.
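As a companion sketch, again ours rather than the authors', the fragment below assembles a restart-augmented deflected step in an RSM-like loop: the gradient is estimated by least-squares regression over a 2^2 factorial (first-order) design with noisy responses, and the direction is reset to steepest descent whenever a Powell-type restart test (loss of orthogonality between successive gradient estimates) fires. The test function, noise level, design radius, and fixed step size are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_response(x, sigma=0.05):
    # Hypothetical test function observed with random perturbation, mirroring
    # the "with and without random perturbations" experiments.
    return 10.0 * x[0] ** 2 + x[1] ** 2 + rng.normal(0.0, sigma)

def estimate_gradient(center, h=0.1):
    # Fit a first-order model y ~ b0 + b1*c1 + b2*c2 over a 2^2 factorial
    # design of radius h around `center`; (b1, b2)/h estimates the gradient.
    design = np.array([[-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0], [1.0, 1.0]])
    X = np.column_stack([np.ones(len(design)), design])
    y = np.array([noisy_response(center + h * c) for c in design])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b[1:] / h

def powell_restart(g_new, g_old):
    # Restart test attributed to Powell: reset to steepest descent when
    # successive gradient estimates lose orthogonality.
    return abs(g_new @ g_old) >= 0.2 * (g_new @ g_new)

x = np.array([1.0, 1.0])
g = estimate_gradient(x)
d = -g
for _ in range(20):
    x = x + 0.02 * d                        # fixed step; RSM would line-search
    g_new = estimate_gradient(x)
    if powell_restart(g_new, g):
        beta = 0.0                          # restart: pure steepest descent
    else:
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
    d = -g_new + beta * d
    g = g_new
print("final point:", x)
```

The restart guard matters precisely because the gradients here are regression estimates: under noise, successive estimates can drift far from conjugacy, and resetting the deflection prevents the accumulated direction from being polluted by stale information.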