Abstract

Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show ‘acceptable’ agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, ‘false positive’ rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined ‘target’ for the model to hit. We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series against data time series, and plotting predicted versus observed values) have little diagnostic power. We propose that it may be more useful to statistically extract the relationships of primary interest from the time series, and test the model directly against them.
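The point about false positives can be made concrete with a small Bayesian calculation. The sketch below is illustrative only; the skeptic's prior and the test's ‘hit rate’ are assumed values for the sake of the example, not numbers taken from the abstract.

```python
# Minimal sketch of the 'rational skeptic' arithmetic, by Bayes' rule.
# The prior and hit_rate values below are illustrative assumptions.
def posterior_validity(prior, hit_rate, false_positive_rate):
    """P(model is valid | model passed the test)."""
    passed_and_valid = prior * hit_rate
    passed_and_invalid = (1 - prior) * false_positive_rate
    return passed_and_valid / (passed_and_valid + passed_and_invalid)

# A skeptic who initially gives the model a 10% chance of being valid:
print(posterior_validity(prior=0.10, hit_rate=0.95, false_positive_rate=0.10))  # ~0.51
print(posterior_validity(prior=0.10, hit_rate=0.95, false_positive_rate=0.01))  # ~0.91
```

A test that an invalid model passes 10% of the time leaves such a skeptic near even odds, whereas a stricter test (1% false positives) is far more persuasive.

Similarly, the weak diagnostic power of predicted-versus-observed comparisons can be illustrated with synthetic data. Everything in the following sketch (the seasonal cycle, the hypothetical ‘acid loading’ driver, and the coefficients) is invented for illustration; it is not the demonstration presented in the paper, only an example in the same spirit: a structurally wrong model can match the data closely overall while missing the relationship of primary interest.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 20, 1 / 12)                      # 20 years of monthly samples
seasonal = 50 * np.sin(2 * np.pi * t)             # dominant seasonal cycle
loading = 100 - 3 * t                             # hypothetical declining acid loading
observed = seasonal + 0.5 * loading + rng.normal(0, 5, t.size)  # data respond to loading
model = seasonal + 0.5 * loading.mean()           # 'wrong' model: no response to loading

# Predicted-versus-observed agreement looks impressive (r^2 well above 0.9)...
r = np.corrcoef(model, observed)[0, 1]
print(f"predicted vs observed r^2 = {r ** 2:.2f}")

# ...but extracting the relationship of primary interest (the response to loading,
# here after removing the known seasonal cycle) exposes the structural error.
obs_response = np.polyfit(loading, observed - seasonal, 1)[0]   # ~0.5
mod_response = np.polyfit(loading, model - seasonal, 1)[0]      # ~0
print(f"data response to loading:  {obs_response:.2f}")
print(f"model response to loading: {mod_response:.2f}")
```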