Measuring Forecast Quality

The process of forecast verification/evaluation is often an unseen partner in engineering and scientific advances. Verification provides systematic and objective evaluation of the quality (or performance) of a forecasting system. In turn, this process allows fair comparisons to be made between forecasting systems. Diagnostics based on verification results provide information about a model’s strengths and weaknesses, and often have an impact on the direction of model improvements, depending on the focus of the evaluation (e.g., extreme precipitation vs. 500-mb height). In addition, forecast verification can help forecasters learn to improve their forecasts, and it provides information to users to aid in decision making.
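As a simple illustration of the kind of comparison verification makes possible, the sketch below scores two hypothetical forecasting systems against the same observations using two common continuous statistics, mean error (bias) and root-mean-square error. The data and system names are invented for demonstration and do not represent any particular forecasting system or RAL tool.

```python
# Minimal sketch: comparing two hypothetical forecasting systems against
# the same observations with simple continuous verification statistics.
import numpy as np

def bias(forecast, observed):
    """Mean error: positive values indicate over-forecasting on average."""
    return np.mean(forecast - observed)

def rmse(forecast, observed):
    """Root-mean-square error: overall magnitude of forecast errors."""
    return np.sqrt(np.mean((forecast - observed) ** 2))

# Hypothetical matched forecast/observation pairs (e.g., 2-m temperature, deg C).
obs = np.array([12.1, 14.3, 9.8, 11.0, 13.5])
system_a = np.array([12.8, 14.0, 10.5, 11.9, 13.1])
system_b = np.array([11.0, 13.2, 9.0, 10.1, 12.6])

for name, fcst in [("System A", system_a), ("System B", system_b)]:
    print(f"{name}: bias={bias(fcst, obs):+.2f}  rmse={rmse(fcst, obs):.2f}")
```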

To ensure credibility, every activity focused on improving forecasting systems or on providing forecasts to users should include an associated verification effort to monitor the performance of the forecasting system and to assess any improvements in capability. Comprehensive evaluations can lead to improvements in forecasting capabilities and in the services provided to forecast users.

RAL develops improved verification approaches and tools that provide more meaningful and relevant information about forecast performance. The focus of this effort is on diagnostic, statistically valid approaches, including feature-based evaluation of precipitation and convective forecasts and distribution-based approaches, which offer more informative measures of forecast performance for forecast developers as well as forecast users. In addition, RAL develops forecast evaluation tools and training on verification methods that are available to members of the operational, model development, and research communities.
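For categorical fields such as precipitation, a common building block of this kind of evaluation is the 2x2 contingency table formed by applying a threshold to matched forecast and observed values. The sketch below, which uses invented data and is not a substitute for RAL's verification tools, computes three widely used scores from such a table: probability of detection, false alarm ratio, and critical success index.

```python
# Minimal sketch of threshold-based categorical verification, such as might
# be applied to precipitation forecasts. Data are hypothetical.
import numpy as np

def contingency_scores(forecast, observed, threshold):
    """Build a 2x2 contingency table at `threshold` and return common scores."""
    fcst_yes = forecast >= threshold
    obs_yes = observed >= threshold

    hits = np.sum(fcst_yes & obs_yes)
    false_alarms = np.sum(fcst_yes & ~obs_yes)
    misses = np.sum(~fcst_yes & obs_yes)

    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return {"POD": pod, "FAR": far, "CSI": csi}

# Hypothetical 6-h precipitation accumulations (mm) at matched grid points.
obs = np.array([0.0, 2.5, 7.1, 0.3, 12.0, 4.4, 0.0, 6.8])
fcst = np.array([0.1, 6.0, 3.5, 0.0, 9.5, 1.0, 5.2, 7.3])

scores = contingency_scores(fcst, obs, threshold=5.0)
print(", ".join(f"{name}={value:.2f}" for name, value in scores.items()))
```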

Contact

Michael Ek

Director, Joint Numerical Testbed

Tara Jensen

Project Manager II
