Computer experiments: multiobjective optimization and sensitivity analysis

Computer experiments have emerged as a popular tool for studying the relationship between a response variable and the factors that affect it when a computational model of this relationship is available. They have proven particularly useful in applications where properly designed physical experiments are infeasible. This thesis considers two problems that arise in the design and analysis of computer experiments: approximation of the Pareto front and Pareto set in a multiple-output computer experiment, and calculation of sensitivity indices of input factors for a broader class of models than has been studied previously.

For the first problem, several new design criteria for approximating the Pareto front are developed. The resulting sequential designs generalize the well-known expected improvement approach for optimizing a single objective function. The new methods are compared with expected improvement generalizations for multiobjective optimization previously proposed in the literature; the comparisons rest on both theoretical considerations and empirical results obtained by applying the sequential design criteria to several test functions and engineering applications.

For the second problem, formulas are derived for computing empirical Bayesian estimates of sensitivity indices for Gaussian process models with an arbitrary polynomial mean structure and three parametric correlation families. The polynomial mean has the potential to provide more accurate sensitivity index estimates when the computer output has a large-scale polynomial trend. Additionally, when combined with a compactly supported correlation function and parameter-space restrictions that enforce a prescribed degree of sparsity in the correlation matrix at the design points, the polynomial mean assumption allows sensitivity indices to be estimated for computer experiments with a large number of runs. In such large-design applications, estimates based on the standard constant-mean Gaussian process with a power exponential correlation function can be computationally infeasible. Examples are presented that demonstrate the accuracy of the estimates under these nonstandard modeling assumptions.
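For background only, the following are the standard definitions from the literature that the abstract refers to, not results of this thesis; the notation ($f_{\min}$, $\hat{y}(x)$, $s(x)$, $X_i$) is the conventional choice and is assumed here rather than taken from the text. The expected improvement criterion for minimizing a single objective, given the current best observed value $f_{\min}$, a Gaussian process predictor $\hat{y}(x)$, and its predictive standard error $s(x)$, is commonly written as

\[
\mathrm{EI}(x) \;=\; \mathrm{E}\bigl[\max\{f_{\min}-Y(x),\,0\}\bigr]
\;=\; \bigl(f_{\min}-\hat{y}(x)\bigr)\,\Phi\!\left(\frac{f_{\min}-\hat{y}(x)}{s(x)}\right)
\;+\; s(x)\,\phi\!\left(\frac{f_{\min}-\hat{y}(x)}{s(x)}\right),
\]

where $\Phi$ and $\phi$ denote the standard normal distribution and density functions. Likewise, the first-order sensitivity index of an input $X_i$ is conventionally defined as

\[
S_i \;=\; \frac{\mathrm{Var}\bigl(\mathrm{E}[Y \mid X_i]\bigr)}{\mathrm{Var}(Y)},
\]

the proportion of the output variance attributable to $X_i$ alone.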