A framework for assessing confidence in computational predictions

This article is the third in a series of papers concerning the importance of simulation code validation to the US Department of Energy Accelerated Strategic Computing Initiative (ASCI) program [1]. The series began with a review by John Garcia of the critical need for advanced validation techniques in the ASCI program, which was created to compensate for the absence of nuclear testing through the use of simulation codes. Without testing, the simulation codes must be able to answer critical questions about the reliability of our aging stockpile of weapons. In the second paper, Bill Oberkampf gave an overview of validation concepts and described the requirements for a well-executed validation experiment. In this article we discuss the analysis of data obtained from validation experiments and motivate the use of uncertainties to quantify the accuracy of predictions made by simulation codes. This work represents only a small fraction of the many verification and validation projects currently being conducted at the DOE National Laboratories and at several universities under the auspices of the ASCI program.

Engineers routinely use simulation codes to analyze and design critical structures and devices. Because public safety is often involved, confidence in the predictions made by simulation codes is of paramount importance. An engineer needs to be confident that, when used in an appropriate way, a simulation code will predict the behavior of the system under study to a specified degree of accuracy. The goal in validating a simulation code is to determine the degree to which the output of the code agrees with the actual behavior of a physical system in a specified situation. Because the criterion is real-world behavior, validation must involve comparison of the simulation code’s output with experimental results.

Uncertainties in a quantity are described in terms of a probability density function (pdf) that specifies the probability of all possible values of that quantity. In this context, probability is used as a quantitative measure of our degree of belief, which summarizes our knowledge about a particular situation [2]. We use the Monte Carlo technique to make this kind of probabilistic analysis more tangible, as well as to obtain quantitative estimates of uncertainties.

We will discuss the analysis of validation experiments and the role of uncertainties in material models and in experimental conditions. From the viewpoint of uncertainties, inference about how well a simulation code can predict physical phenomena is limited not only by the uncertainties in the relevant measurements, but also by how well the conditions of the experiments are characterized.
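To make the Monte Carlo approach mentioned above concrete, the following Python sketch propagates uncertain inputs, each described by a pdf, through a stand-in for a simulation code and summarizes the resulting pdf of the prediction. The cantilever-beam model, the parameter values, and the chosen distributions are illustrative assumptions for this sketch, not the models or codes discussed in this series.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical uncertain inputs, each represented by samples from its pdf:
# a material modulus (normal) and an applied load (uniform).
modulus = rng.normal(loc=200e9, scale=10e9, size=n)  # Pa
load = rng.uniform(low=0.9e3, high=1.1e3, size=n)    # N

def deflection(E, P, L=1.0, I=1e-8):
    # Stand-in "simulation code": tip deflection of a cantilever beam.
    return P * L**3 / (3.0 * E * I)

# Running the model once per input sample yields samples of the output,
# which approximate the pdf of the prediction.
samples = deflection(modulus, load)

mean = samples.mean()
lo, hi = np.percentile(samples, [2.5, 97.5])
print(f"predicted deflection: mean = {mean:.4e} m")
print(f"95% interval = [{lo:.4e}, {hi:.4e}] m")

The spread of the output samples quantifies the uncertainty in the prediction, and it is this predicted pdf, rather than a single number, that a validation analysis compares against the measured behavior of the physical system.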