Improving the Statistical Representation of a Modeler's Prior Knowledge to Speed the Evaluation of Model Uncertainty
Computational hydrologic, hydraulic, and sediment-transport models are used to forecast future conditions and to analyze the impacts of potential changes to water systems. Unfortunately, the results of these models are uncertain because of uncertainty in the model inputs, the values of the model parameters, and the mathematical representation of the system. Several methods, such as Generalized Likelihood Uncertainty Estimation (GLUE) and the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), are available to assess parameter uncertainty. However, these methods require too many model simulations to be practical for hydraulic and sediment-transport models, for which each simulation demands significant computation time.

This study explores a new approach that speeds up uncertainty estimation by better representing the knowledge the modeler has before the estimation begins. Both GLUE and SCEM-UA assume that the modeler knows nothing beyond the feasible bounds of the parameter distributions, but an experienced modeler can often predict in advance which parameters will have the greatest effect on the results and can even estimate realistic values for the more important parameters. In addition, some models are calibrated manually before the uncertainty estimation is performed. To exploit these advantages, several modifications to the uncertainty estimation methods are proposed. First, the model is manually calibrated to find the best parameter values for the given case. Then, a sensitivity analysis is conducted to identify the parameters that contribute the most to output variability: parameters that introduce substantial variation are treated as uncertain, while parameters with little effect are treated as certain and fixed at their calibrated values. For the uncertain parameters, prior distributions are then defined as beta distributions that encode the modeler's prior knowledge of the parameter values.

The revised algorithm is tested by applying it to simple test models, and its performance is evaluated by comparing the number of required simulations and the resulting estimates of forecast uncertainty against those of the existing uncertainty estimation methods. Illustrative sketches of the main steps follow below.
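To make the screening step concrete, the following is a minimal sketch of a one-at-a-time sensitivity screen around the calibrated parameter set. The abstract does not specify which sensitivity method is used, so the bound-to-bound perturbation scheme, the `rel_threshold` cutoff, and the function name `screen_parameters` are illustrative assumptions rather than the study's actual procedure.

```python
import numpy as np

def screen_parameters(model, best, bounds, rel_threshold=0.05):
    """One-at-a-time sensitivity screen around the calibrated parameter set.

    model         : callable mapping a parameter vector to model output
    best          : calibrated (best) parameter values
    bounds        : list of (low, high) feasible bounds, one per parameter
    rel_threshold : hypothetical cutoff separating "uncertain" from "certain"
    """
    base = np.asarray(model(np.asarray(best, dtype=float)), dtype=float)
    scale = np.abs(base).max() + 1e-12        # guard against division by zero
    spread = []
    for i, (lo, hi) in enumerate(bounds):
        outs = []
        for value in (lo, hi):                # push parameter i to each bound
            theta = np.array(best, dtype=float)
            theta[i] = value
            outs.append(np.asarray(model(theta), dtype=float))
        # output range caused by parameter i alone, relative to the base run
        spread.append(np.max(np.abs(outs[1] - outs[0])) / scale)
    spread = np.asarray(spread)
    uncertain = np.flatnonzero(spread >= rel_threshold)  # keep as random
    certain = np.flatnonzero(spread < rel_threshold)     # fix at calibrated value
    return uncertain, certain, spread
```

This screen costs only two extra runs per parameter, which matches the study's goal: spend a small, fixed number of simulations up front to avoid sampling dimensions that barely affect the output.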
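The beta priors for the uncertain parameters could be parameterized in several ways; one simple option, sketched below, places the distribution's mode at the calibrated value and rescales it to the feasible bounds. The `concentration` knob that controls how tightly the prior clusters around the calibrated value is a hypothetical choice, not something specified in the abstract.

```python
from scipy import stats

def beta_prior(mode, low, high, concentration=6.0):
    """Scaled beta prior whose mode sits at the calibrated value.

    mode          : the modeler's best (calibrated) estimate
    low, high     : feasible bounds for the parameter
    concentration : alpha + beta; larger values give a tighter prior
                    (a hypothetical tuning knob, not from the source)
    """
    m = (mode - low) / (high - low)           # rescale the mode to [0, 1]
    # For beta(a, b) with a, b >= 1, the mode is (a - 1) / (a + b - 2)
    a = m * (concentration - 2.0) + 1.0
    b = concentration - a
    return stats.beta(a, b, loc=low, scale=high - low)

# Example: a roughness coefficient believed to lie near 0.035 within [0.02, 0.06]
prior = beta_prior(0.035, 0.02, 0.06)
samples = prior.rvs(size=1000)                # draws concentrated near 0.035
```

A beta distribution is a natural fit here because, unlike a Gaussian, it respects the hard feasible bounds while still letting the modeler concentrate probability near a believed value.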
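Finally, for context on where those priors enter, here is a minimal sketch of a GLUE-style rejection sampler modified to draw from informative priors instead of the uniform priors GLUE conventionally assumes, using the Nash-Sutcliffe efficiency as the informal likelihood (a common choice in the GLUE literature). The `model` callable, sample count, and behavioral threshold are placeholders.

```python
import numpy as np

def glue(model, priors, observed, n_samples=5000, threshold=0.6):
    """Minimal GLUE rejection sampler driven by informative priors.

    model   : callable mapping a parameter vector to a simulated series
    priors  : list of frozen scipy.stats distributions (e.g., the beta
              priors sketched above), one per uncertain parameter
    observed: observed series used to score each simulation
    """
    obs = np.asarray(observed, dtype=float)
    params, likes, sims = [], [], []
    for _ in range(n_samples):
        theta = np.array([p.rvs() for p in priors])  # draw from the priors
        sim = np.asarray(model(theta), dtype=float)
        # Nash-Sutcliffe efficiency as the informal likelihood measure
        nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
        if nse >= threshold:                         # keep behavioral sets only
            params.append(theta)
            likes.append(nse)
            sims.append(sim)
    weights = np.array(likes) / np.sum(likes)        # likelihood weights
    return np.array(params), weights, np.array(sims)
```

Forecast uncertainty bounds then come from likelihood-weighted quantiles of the retained simulations. Every rejected draw is a wasted model run, which is exactly the expense that fixing insensitive parameters and concentrating the priors near calibrated values is intended to reduce.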