In computational physics and engineering, numerical models are developed to predict the behavior of systems whose response cannot be measured experimentally. A key aspect of science-based predictive modeling is the assessment of prediction credibility. Credibility, which is demonstrated through the activities of verification and validation, quantifies the extent to which simulation results can be trusted to represent the phenomenon of interest with an accuracy consistent with the intended use of the model. This paper argues that assessing the credibility of a mathematical or numerical model must combine three components: 1) improving the fidelity to test data; 2) studying the robustness of prediction-based decisions to variability, uncertainty, and lack of knowledge; and 3) establishing the expected prediction accuracy of the model in situations where test measurements are not available. A recently published theorem that demonstrates the irrevocable trade-offs between "The Good, The Bad, and The Ugly," that is, robustness-to-uncertainty, fidelity-to-data, and confidence-in-prediction, is summarized. The main implication is that high-fidelity models cannot, at the same time, be made robust to uncertainty and lack of knowledge. Similarly, equally robust models do not provide consistent predictions, which reduces confidence-in-prediction. The conclusion of the theoretical investigation is that the predictive accuracy of numerical models should never be assessed from a single aspect; instead, the trade-offs between fidelity-to-data, robustness-to-uncertainty, and confidence-in-prediction should be explored. The discussion is illustrated with an engineering application that consists of modeling and predicting the propagation of an impact through a layer of hyper-foam material. A novel definition of sensitivity coefficients is suggested, based on the slopes of robustness-to-uncertainty curves. This definition makes it possible to quantify the sensitivity of a performance metric to arbitrary sources of uncertainty, whether they are represented with probability laws or any other theory of uncertainty. This publication has been approved for unlimited, public release on November 18, 2003 (LA-UR-03-8492, Unclassified).
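A minimal sketch of the robustness-curve idea in Python may help fix intuition. It assumes a hypothetical one-parameter performance model and a simple fractional-error info-gap uncertainty set; the model, the uncertainty set, and all function names are illustrative, not taken from the paper. The sensitivity coefficient is interpreted here as the slope of the robustness curve with respect to the performance requirement.

    import numpy as np

    # Illustrative performance model (hypothetical): some scalar metric,
    # e.g. peak acceleration transmitted through a foam layer, as a
    # function of a single stiffness-like parameter theta.
    def performance(theta):
        return 1.0 + (theta - 1.0) ** 2  # nominal optimum at theta = 1.0

    THETA_NOMINAL = 1.0

    def worst_case(h, n=201):
        """Worst (largest) performance over the info-gap set
        U(h) = { theta : |theta - THETA_NOMINAL| <= h }."""
        thetas = np.linspace(THETA_NOMINAL - h, THETA_NOMINAL + h, n)
        return performance(thetas).max()

    def robustness(r_c, h_max=5.0, n=2001):
        """Info-gap robustness: the largest horizon of uncertainty h such
        that the worst-case performance still meets the requirement r_c."""
        hs = np.linspace(0.0, h_max, n)
        ok = np.array([worst_case(h) <= r_c for h in hs])
        return hs[ok].max() if ok.any() else 0.0

    # Robustness curve h_hat(r_c) and its slope, used as a sensitivity
    # coefficient: a shallow slope means that relaxing the performance
    # requirement buys little additional robustness to uncertainty.
    r_values = np.linspace(1.1, 3.0, 20)
    h_hat = np.array([robustness(r) for r in r_values])
    slopes = np.gradient(h_hat, r_values)

    for r, h, s in zip(r_values, h_hat, slopes):
        print(f"r_c = {r:5.2f}  robustness = {h:5.3f}  slope = {s:5.3f}")

Running the sketch exhibits the trade-off in miniature: demanding high fidelity (a small requirement r_c) leaves essentially zero robustness, and robustness grows only as the requirement is relaxed, which is the Good/Bad/Ugly trade-off summarized above.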