Defining Coverage of a Domain Using a Modified Nearest-Neighbor Metric

Validation experiments are conducted at discrete settings within the domain of interest to assess the predictive maturity of a model over the entire domain. Satisfactory model performance at these discrete tested settings alone is insufficient to ensure that the model will perform well throughout the domain, particularly at settings far from the validation experiments. The goal of coverage metrics is to reveal how well a set of validation experiments represents the entire operational domain. The authors identify the criteria that an exemplary coverage metric should satisfy, evaluate the ability of existing coverage metrics to fulfill each criterion, and propose a new, improved coverage metric. The proposed metric favors interpolation over extrapolation through a penalty function, which leads it to prefer designs of validation experiments that lie near the boundaries of the domain while simultaneously exploring its interior. Furthermore, the proposed metric allows the coverage assessment to account for the uncertainty associated with the validation experiments. Application of the proposed coverage metric to a practical, non-trivial problem is demonstrated using the Viscoplastic Self-Consistent (VPSC) material plasticity code for the 5182 aluminum alloy.

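The abstract does not give the functional form of the metric, but a minimal sketch can illustrate the general idea of a nearest-neighbor coverage measure with an extrapolation penalty and uncertainty weighting. The function name `coverage`, the multiplicative `penalty` factor, the convex-hull test used to flag extrapolation, and the way uncertainty inflates distances below are all illustrative assumptions, not the formulation proposed in the paper.

```python
# Hypothetical sketch of a penalized nearest-neighbor coverage metric.
# The exact penalty function, uncertainty weighting, and normalization
# used in the paper are not reproduced here; this only illustrates the
# general idea described in the abstract.
import numpy as np
from scipy.spatial import Delaunay


def coverage(domain_points, experiments, exp_uncertainty, penalty=2.0):
    """Average penalized nearest-neighbor distance over the domain.

    domain_points   : (m, d) sample of the operational domain
    experiments     : (n, d) settings of the validation experiments
    exp_uncertainty : (n,) uncertainty of each experiment; larger values
                      make that experiment "cover" less of the domain
    penalty         : factor > 1 applied where reaching the nearest
                      experiment would require extrapolation
    """
    # Distances from every domain point to every experiment.
    dists = np.linalg.norm(
        domain_points[:, None, :] - experiments[None, :, :], axis=-1
    )
    # Inflate distances to uncertain experiments (illustrative choice).
    dists = dists * (1.0 + exp_uncertainty[None, :])
    nearest = dists.min(axis=1)

    # Penalize domain points outside the convex hull of the experiments,
    # i.e. points that could only be reached by extrapolation.
    hull = Delaunay(experiments)
    outside = hull.find_simplex(domain_points) < 0
    nearest[outside] *= penalty

    # Smaller values indicate better coverage of the domain.
    return nearest.mean()
```

Under this convention, smaller values indicate that every point of the operational domain lies close to a low-uncertainty validation experiment, whereas a design concentrated in one corner of the domain, or one whose experiments carry large uncertainty, scores poorly.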