MoVars: Multidisciplinary Optimization Via Adaptive Response Surfaces

An emerging need in industry is to perform simulation-based design with several hundred design variables. These simulations often have long run times, do not compute derivatives, and are not sufficiently smooth to work well with standard gradient-based methods. Many of the obstacles to using such codes have been overcome with automation and alternative optimization methods, though others remain open. One effective approach has been SEQOPT (sequential modeling and optimization) [9], part of Design Explorer, a suite of tools for design space exploration and optimization.

However, when the number of variables grows large (more than 100), our current approach as implemented in Design Explorer is no longer practical. This paper explains these limitations and presents a new approach that overcomes them; we call it MoVars, for "more variables" or Multidisciplinary Optimization Via Adaptive Response Surfaces. The solution process in SEQOPT becomes impractical for large problems for several reasons:

- Experiments: The number of simulation runs Design Explorer typically suggests for the initial experiments grows with the square of the number of variables, and becomes impractical even with today's and tomorrow's large-scale computers (see the arithmetic sketch after this list).
- Building models: Even if the simulations could be run the number of times needed to build a model, the cost of building a kriging model like the ones used in SEQOPT is prohibitive. Building a model involves solving a global optimization problem whose dimension is related to the number of variables and the number of sites in the experiment (see the likelihood sketch below).
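As a rough, hypothetical illustration of the quadratic growth noted under Experiments (the paper does not give Design Explorer's exact run-count formula), a full quadratic response surface in n variables has (n + 1)(n + 2)/2 coefficients, so an initial experiment rich enough to determine one needs at least that many simulation runs:

```python
# Hypothetical illustration: minimum run counts if the initial experiment must
# support a full quadratic model in n variables. The paper states only that the
# suggested count grows with the square of n, not this exact formula.
for n in (10, 50, 100, 300):
    runs = (n + 1) * (n + 2) // 2   # coefficients in a full quadratic in n vars
    print(f"n = {n:4d} variables -> at least {runs:6d} simulation runs")
# n = 10 -> 66, n = 50 -> 1326, n = 100 -> 5151, n = 300 -> 45451
```

At several hundred variables, this lower bound already exceeds tens of thousands of runs of an expensive simulation.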
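To make the model-building cost concrete, here is a minimal sketch, under assumed details and not the SEQOPT or Design Explorer implementation, of fitting the correlation parameters of a constant-mean, Gaussian-correlation kriging model by maximum likelihood with standard NumPy/SciPy calls. The function name, synthetic data, and parameter bounds are all illustrative:

```python
# A minimal sketch (assumed details, not the Design Explorer implementation) of
# why kriging model building is a global optimization problem: the correlation
# parameters theta (one per design variable) are fit by maximizing a likelihood
# whose every evaluation needs an O(m^3) factorization of the m x m correlation
# matrix over the m experiment sites.
import numpy as np
from scipy.linalg import cholesky, cho_solve
from scipy.optimize import differential_evolution

def neg_concentrated_loglik(log_theta, X, y):
    """Concentrated negative log-likelihood of a constant-mean kriging model
    with Gaussian correlation R_ij = exp(-sum_k theta_k (x_ik - x_jk)^2)."""
    theta = np.exp(log_theta)                      # keep theta > 0
    m = len(y)
    diff = X[:, None, :] - X[None, :, :]           # (m, m, d) site differences
    R = np.exp(-((diff ** 2) * theta).sum(axis=-1)) + 1e-10 * np.eye(m)
    c = cholesky(R, lower=True)                    # O(m^3) per evaluation
    ones = np.ones(m)
    mu = (ones @ cho_solve((c, True), y)) / (ones @ cho_solve((c, True), ones))
    r = y - mu
    sigma2 = (r @ cho_solve((c, True), r)) / m     # MLE of the process variance
    log_det_R = 2.0 * np.log(np.diag(c)).sum()
    return m * np.log(sigma2) + log_det_R          # up to an additive constant

rng = np.random.default_rng(0)
d, m = 5, 50                                       # 5 variables, 50 sites
X = rng.random((m, d))                             # toy "experiment"
y = np.sin(X.sum(axis=1))                          # toy simulation responses

# The global search is over d dimensions: it grows with the number of design
# variables, and every trial point costs an O(m^3) Cholesky factorization.
result = differential_evolution(neg_concentrated_loglik,
                                bounds=[(-5.0, 5.0)] * d, args=(X, y), seed=0)
print("fitted theta:", np.exp(result.x))
```

With hundreds of variables and a correspondingly large experiment, both the dimension of this global search and the cost of each likelihood evaluation grow, which is exactly the prohibitive model-building cost the list above describes.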

[1] C. G. Broyden. A Class of Methods for Solving Nonlinear Simultaneous Equations, 1965.

[2] Charles Audet, et al. A surrogate-model-based method for constrained optimization, 2000.

[3] Michael C. Ferris, et al. Parallel Variable Distribution. SIAM J. Optim., 1994.

[4] C. G. Broyden, et al. The convergence of an algorithm for solving sparse nonlinear systems, 1971.

[5] Jorge J. Moré, et al. Testing Unconstrained Optimization Software. TOMS, 1981.

[6] Evin J. Cramer, et al. Effective Parallel Optimization of Complex Computer Simulations, 2004.

[7] K. Brown. A Quadratically Convergent Newton-Like Method Based Upon Gaussian Elimination, 1968.

[8] Charles Audet, et al. A Pattern Search Filter Method for Nonlinear Programming without Derivatives. SIAM J. Optim., 2001.

[9] Sven Leyffer, et al. On the Global Convergence of a Filter--SQP Algorithm. SIAM J. Optim., 2002.

[10] J. Sobieszczanski-Sobieski, et al. Bilevel Integrated System Synthesis for Concurrent and Distributed Processing, 2002.

[11] C. D. Perttunen, et al. Lipschitzian optimization without the Lipschitz constant, 1993.

[12] C. Currin, et al. A Bayesian Approach to the Design and Analysis of Computer Experiments, 1988.

[13] Michel Cosnard, et al. Numerical Solution of Nonlinear Equations. TOMS, 1979.

[14] Ilan Kroo, et al. Collaborative optimization using response surface estimation, 1998.

[15] A. J. Booker, et al. A rigorous framework for optimization of expensive functions by surrogates, 1998.

[16] Max D. Morris, et al. Factorial sampling plans for preliminary computational experiments, 1991.

[17] Thomas J. Santner, et al. The Design and Analysis of Computer Experiments. Springer Series in Statistics, 2003.