The Model Evaluation Tools (MET): Community tools for forecast evaluation [presentation]

Assessments of forecast quality are a critical component of the forecast development, improvement, and application processes. While basic verification capabilities have been in operational use for many years, modern, state-of-the-art tools are especially needed to provide meaningful evaluations of high-resolution numerical weather prediction (NWP) model forecasts. The Model Evaluation Tools (MET) verification package has been developed to provide this capability and to aid the Developmental Testbed Center (DTC) in testing and evaluation of the Weather Research and Forecasting (WRF) model. MET is a community-developed package that is freely available and distributed through the DTC. The package includes numerous statistical tools for forecast evaluation, including traditional measures for categorical and continuous variables [e.g., the critical success index (CSI) and the root-mean-square error (RMSE), respectively]. In addition, MET provides advanced spatial forecast evaluation techniques. Two general categories of spatial methods are currently included in MET: (i) “object-based” and (ii) “neighborhood” techniques. An upcoming release will also include wavelet techniques that are able to estimate forecast performance at different scales. To account for the uncertainty associated with the estimation of traditional verification measures, methods for computing confidence intervals for the verification statistics are an integral part of MET. This paper describes MET development, capabilities, and future plans.
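To make the traditional measures and the accompanying confidence intervals concrete, the sketch below computes RMSE for a continuous variable, CSI from a 2x2 contingency table for a thresholded event, and a percentile bootstrap confidence interval for RMSE. This is a minimal, hypothetical illustration in Python/NumPy rather than MET's implementation; the data values, the event threshold, and the replicate count are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical matched forecast/observation pairs (e.g., 2-m temperature, K).
fcst = np.array([271.2, 273.5, 275.1, 270.8, 274.9, 272.3])
obs  = np.array([270.9, 274.0, 274.2, 271.5, 275.3, 271.8])

# Continuous measure: root-mean-square error.
rmse = np.sqrt(np.mean((fcst - obs) ** 2))

# Categorical measure: critical success index for an event threshold.
# CSI = hits / (hits + misses + false alarms).
thresh = 272.0  # assumed event threshold for the example
f_event, o_event = fcst >= thresh, obs >= thresh
hits         = np.sum(f_event & o_event)
misses       = np.sum(~f_event & o_event)
false_alarms = np.sum(f_event & ~o_event)
csi = hits / (hits + misses + false_alarms)

# Percentile bootstrap confidence interval for RMSE: resample the
# matched pairs with replacement and recompute the statistic.
n_boot = 1000  # assumed number of bootstrap replicates
idx = rng.integers(0, len(fcst), size=(n_boot, len(fcst)))
boot_rmse = np.sqrt(np.mean((fcst[idx] - obs[idx]) ** 2, axis=1))
ci_lo, ci_hi = np.percentile(boot_rmse, [2.5, 97.5])

print(f"RMSE = {rmse:.2f}, CSI = {csi:.2f}, "
      f"95% bootstrap CI for RMSE = [{ci_lo:.2f}, {ci_hi:.2f}]")
```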
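For a flavor of the neighborhood category of spatial methods, the sketch below compares the fractional event coverage within square neighborhoods of a forecast and an observed field and summarizes their agreement with the fractions skill score, a widely used neighborhood measure. It is an illustrative sketch under assumed inputs (random binary fields and a 3x3 neighborhood), not a description of how MET computes its neighborhood statistics.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)

# Hypothetical binary event fields on a model grid (1 = event occurred,
# e.g., precipitation at or above some threshold).
fcst_event = (rng.random((50, 50)) > 0.8).astype(float)
obs_event  = (rng.random((50, 50)) > 0.8).astype(float)

# Neighborhood step: fraction of event grid points within an n x n window
# centered on each grid point (n = 3 is an assumed neighborhood size).
n = 3
fcst_frac = uniform_filter(fcst_event, size=n, mode="constant")
obs_frac  = uniform_filter(obs_event, size=n, mode="constant")

# Fractions skill score: 1 - MSE of the fraction fields divided by the
# MSE of a no-skill reference (no overlap between the two fields).
mse = np.mean((fcst_frac - obs_frac) ** 2)
mse_ref = np.mean(fcst_frac ** 2) + np.mean(obs_frac ** 2)
fss = 1.0 - mse / mse_ref

print(f"FSS for a {n}x{n} neighborhood: {fss:.3f}")
```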