Adaptive and parallel capabilities in the Multipoint Approximation Method

In the present work the Multipoint Approximation Method (MAM) has been enhanced with new capabilities that allow large-scale design optimization problems to be solved more efficiently. The first feature is adaptive building of approximate models during the optimization search; the second is a parallel implementation of MAM. A traditional approach to adaptive building of metamodels is to check several types for their quality on a set of design points and select the best one. The technique presented in this paper is instead based on the assembly of multiple metamodels into one model using linear regression. The coefficients of the model assembly are not weights of the individual models but regression coefficients determined by least squares minimization. The enhancements were implemented within the mid-range approximation framework of MAM. The developed technique has been tested on several benchmark problems.

II. Outline of the Multipoint Approximation Method (MAM)

This technique (Toropov et al., 1993) replaces the original optimization problem by a succession of simpler mathematical programming problems. The functions in each iteration are mid-range approximations to the corresponding original functions, and they are noise-free. The solution of an individual sub-problem becomes the starting point for the next step; the move limits are changed and the optimization is repeated iteratively until the optimum is reached. Each approximation function is defined as a function of the design variables as well as a number of tuning parameters. The latter are determined by weighted least squares surface fitting using the original function values (and their derivatives, when available) at several sampling points of the design variable space. Some of the sampling points are generated in the trust region, and the rest are taken from the extended trust region, i.e. the pool of points considered in the previous iterations (van Keulen et al., 1997). A general optimization problem can be formulated as
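The metamodel assembly described above, in which the predictions of several individual metamodels serve as regressors and the assembly coefficients are fitted by least squares, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two candidate metamodels, the sampled response, and the use of an intercept term are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sampling data: responses at design points in the trust region.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20, 2))            # 20 design points, 2 variables
f = 2.0 * X[:, 1] + 3.0 * (X[:, 0]**2 + X[:, 1]**2) - 0.1   # assumed response

# Candidate metamodels, each assumed to have been fitted individually already
# (here represented by simple closed forms for the sake of the sketch).
def model_a(x):
    return 2.0 * x[:, 1]

def model_b(x):
    return x[:, 0]**2 + x[:, 1]**2

models = [model_a, model_b]

# Assembly: predictions of the individual metamodels form the regression matrix;
# the assembly coefficients (plus an intercept) come from least squares. They are
# regression coefficients, not convex weights -- they need not sum to one.
A = np.column_stack([m(X) for m in models] + [np.ones(len(X))])
coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)

def assembled(x):
    """Assembled metamodel: regression combination of the individual models."""
    return np.column_stack([m(x) for m in models] + [np.ones(len(x))]) @ coeffs
```

Because the sampled response here lies in the span of the candidate models, the fitted coefficients recover the generating combination exactly; on real data the assembly instead gives the least-squares-best blend of the available metamodels over the sampling points.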