Multi-Directional Search: A Direct Search Algorithm for Parallel Machines

by Virginia Joanne Torczon

Abstract

In recent years there has been a great deal of interest in the development of optimization algorithms which exploit the computational power of parallel computer architectures. We have developed a new direct search algorithm, which we call multi-directional search, that is ideally suited for parallel computation. Our algorithm belongs to the class of direct search methods, a class of optimization algorithms which neither compute nor approximate any derivatives of the objective function. Our work, in fact, was inspired by the simplex method of Spendley, Hext, and Himsworth and the simplex method of Nelder and Mead.

The multi-directional search algorithm is inherently parallel. The basic idea of the algorithm is to perform concurrent searches in multiple directions. These searches are free of any interdependencies, so the information required can be computed in parallel. (A sketch of one such search step appears below, following the acknowledgments.)

A central result of our work is the convergence analysis for our algorithm. By requiring only that the function be continuously differentiable over a bounded level set, we can prove that a subsequence of the points generated by the multi-directional search algorithm converges to a stationary point of the objective function. This is of great interest since we know of few convergence results for practical direct search algorithms.

We also present numerical results indicating that the multi-directional search algorithm is robust, even in the presence of noise. Our results include comparisons with the Nelder-Mead simplex algorithm, the method of steepest descent, and a quasi-Newton method. One surprising conclusion of our numerical tests is that the Nelder-Mead simplex algorithm is not robust. We close with some comments about future directions of research.

Acknowledgments

First, I would like to thank my family. Without their unwavering support I would never have made it this far. If they ever doubted that a historian could also be a mathematician, they never let it show.

I would like to thank the members of my committee; each one has made an important contribution to this work. Andy Boyd convinced first me, and then my chairman, that the last argument needed to complete the convergence proof was correct; he then pushed me to a most satisfying generalization. Ken Kennedy shared his knowledge of parallel computation and pointed me towards a very clever load balancing scheme. Richard Tapia deserves special credit for ever agreeing to accept me into the graduate program. He is also responsible for introducing me to optimization theory. Of late, he has shared his knowledge of the theory for descent methods, which has helped shape the final form of my arguments. I have also enjoyed our many conversations on issues as diverse as music, minorities, and machismo.

I owe special thanks to my chairman, John Dennis. He asked me to stay, gave me this problem to work on, and made me tough. I owe a great deal to his inimitable style of motivation: his unerring response to my latest result would be either "You're wrong" or "That can't possibly be true," which of course made me all the more determined to prove that I was, in fact, right. Now he moans that I am always right, as if there were no connection.

I would also like to thank the many other people at Rice from whom I have learned much: Richard Byrd helped isolate the last link needed to complete my convergence result.
Linda Torczon and Keith Cooper have taught me much about computers and computation. Nancy Ginsburg shared the early, difficult years of graduate school. Cathy Samuelsen took up where Nancy left off. Cathy and Karen Williamson suffered through numerous drafts of my thesis; it is better for their efforts.

Finally, I would like to thank my husband, Michael Lewis. He was the first, true believer in this work; I managed to convince him that I was right before I had even convinced myself. His faith in my mathematical abilities has pushed me further than anyone else thought possible. His good taste has left its mark on all this work.
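To make the description in the abstract concrete, the following is a minimal, serial sketch of one multi-directional search iteration: every non-best vertex of the simplex is reflected through the best vertex; if the best reflected value improves on the current best, an expansion step is attempted, and otherwise the simplex is contracted toward the best vertex. The function name mds_step, the NumPy array layout, and the expansion and contraction factors of 2 and 1/2 are illustrative assumptions, not details quoted from the thesis.

```python
import numpy as np

def mds_step(f, simplex, fvals):
    """One illustrative iteration of multi-directional search.

    simplex : (n+1, n) array of vertices, with simplex[0] the best vertex.
    fvals   : function values at the vertices, with fvals[0] = min(fvals).
    Returns the updated (simplex, fvals), reordered so the best vertex leads.
    Factors 2 (expansion) and 1/2 (contraction) are conventional choices.
    """
    v0 = simplex[0]

    # Reflect each non-best vertex through the best vertex. These n
    # evaluations are mutually independent, so on a parallel machine
    # they could be computed concurrently, one per processor.
    reflected = 2.0 * v0 - simplex[1:]
    f_ref = np.array([f(v) for v in reflected])

    if f_ref.min() < fvals[0]:
        # Reflection improved on the best vertex: try expanding twice
        # as far along the same directions.
        expanded = 3.0 * v0 - 2.0 * simplex[1:]
        f_exp = np.array([f(v) for v in expanded])
        if f_exp.min() < f_ref.min():
            new_vs, new_fs = expanded, f_exp
        else:
            new_vs, new_fs = reflected, f_ref
    else:
        # Reflection failed: contract the simplex toward the best vertex.
        new_vs = 0.5 * (v0 + simplex[1:])
        new_fs = np.array([f(v) for v in new_vs])

    simplex = np.vstack([v0, new_vs])
    fvals = np.concatenate([[fvals[0]], new_fs])
    order = np.argsort(fvals)  # keep the best vertex in slot 0
    return simplex[order], fvals[order]
```

The only serial work in each iteration is the comparison of function values; the three evaluation loops above are exactly the interdependency-free searches the abstract describes.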
References

[1] H. H. Rosenbrock et al., An Automatic Method for Finding the Greatest or Least Value of a Function, Comput. J., 1960.
[2] Samuel H. Brooks et al., Optimum Estimation of Gradient Direction in Steepest Ascent Experiments, 1961.
[3] Robert Hooke et al., "Direct Search" Solution of Numerical and Statistical Problems, JACM, 1961.
[4] G. R. Hext et al., Sequential Application of Simplex Designs in Optimisation and Evolutionary Operation, 1962.
[5] M. J. D. Powell et al., An efficient method for finding the minimum of a function of several variables without calculating derivatives, Comput. J., 1964.
[6] John A. Nelder et al., A Simplex Method for Function Minimization, Comput. J., 1965.
[7] M. J. Box, A Comparison of Several Current Optimization Methods, and the use of Transformations in Constrained Problems, Comput. J., 1966.
[8] Willard I. Zangwill et al., Minimizing a function without calculating derivatives, Comput. J., 1967.
[9] M. J. Box et al., Non-linear optimization techniques, 1969.
[10] James M. Ortega et al., Iterative solution of nonlinear equations in several variables, Computer Science and Applied Mathematics, 1970.
[11] V. L. Anderson, Evolutionary Operation: A Method for Increasing Industrial Productivity, 1970.
[12] Janusz S. Kowalik et al., Iterative methods for nonlinear optimization problems, 1972.
[13] R. D. Krause et al., Use of the simplex method to optimize analytical conditions in clinical chemistry, Clinical Chemistry, 1974.
[14] S. Deming et al., Simplex optimization of analytical chemical methods, 1974.
[15] G. L. Ritter et al., Simplex pattern recognition, 1975.
[16] M. B. Denton et al., Performance of the Super Modified Simplex, 1977.
[17] Wen-ci Yu, Positive Basis and a Class of Direct Search Techniques, 1979.
[18] R. B. Spencer et al., High-speed algorithm for simplex optimization calculations, 1979.
[19] D. Walmsley, The Simplex Method for Minimisation of a General Function, 1981.
[20] Philip E. Gill et al., Practical optimization, 1981.
[21] John E. Dennis et al., Numerical methods for unconstrained optimization and nonlinear equations, Prentice Hall Series in Computational Mathematics, 1983.
[22] Daniel J. Woods et al., Optimization on Microcomputers: The Nelder-Mead Simplex Algorithm, 1985.
[23] Robert B. Schnabel et al., Concurrent Function Evaluations in Local and Global Optimization, CU-CS-345-86, 1987.
[24] D. Chen et al., A New Simplex Procedure for Function Minimization, 1986.
[25] K. Burton et al., Optimisation via simplex, 1987.
[26] Richard H. Byrd et al., Using parallel function evaluations to improve Hessian approximation for unconstrained optimization, 1988.
[27] Thomas F. Coleman et al., A parallel triangular solver for a distributed-memory multiprocessor, 1988.
[28] Thomas F. Coleman et al., A New Method for Solving Triangular Systems on Distributed Memory Message-Passing Multiprocessors, 1989.
[29] Thomas F. Coleman et al., Solving Systems of Nonlinear Equations on a Message-Passing Multiprocessor, SIAM J. Sci. Comput., 1990.
[30] C. H. Still, Parallel Quasi-Newton Methods for Unconstrained Optimization, Proceedings of the Fifth Distributed Memory Computing Conference, 1990.