Forecast Evaluation and Applied Statistics

Providing meaningful information to researchers and operational end users of weather and climate models

The roots of applied statistics in RAL are in forecast verification, the process of determining the quality of forecasts. Statistical verification is a critical step in improving forecasts: by evaluating forecast products throughout the development process, deficiencies in the algorithms can be discovered and corrected. Verification also benefits forecasters and end users by supplying them with objective data about the quality or accuracy of the forecasts, which can feed into decision processes (Brown, 1996).
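As a concrete illustration of the objective quality information that verification supplies, the sketch below computes three standard categorical scores from a 2x2 contingency table of forecast versus observed events. This is an illustrative example, not MET itself; the function name and the 1 mm precipitation threshold are chosen here for demonstration.

```python
import numpy as np

def contingency_scores(forecast, observed, threshold):
    """Standard categorical verification scores from a 2x2
    contingency table of yes/no forecast and observed events."""
    f = np.asarray(forecast) >= threshold   # forecast "yes" events
    o = np.asarray(observed) >= threshold   # observed "yes" events
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return pod, far, csi

# Example: 1 mm precipitation threshold on paired forecast/obs values
fcst = [0.0, 2.5, 1.2, 0.1, 3.0, 0.0]
obs  = [0.0, 1.8, 0.0, 1.5, 2.2, 0.0]
pod, far, csi = contingency_scores(fcst, obs, threshold=1.0)
```

Scores of this kind are the building blocks of decision-oriented verification: a user can weigh the cost of a missed event (reflected in POD) against the cost of acting on a false alarm (reflected in FAR).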

Model Evaluation Tools (MET) is designed to be a highly-configurable, state-of-the-art suite of verification tools.

The JNTP verification and statistics team develops improved verification approaches and tools that deliver more meaningful and relevant information about forecast performance. The focus of this effort is on diagnostic, statistically valid approaches, including feature-based evaluation of precipitation and convective forecasts, as well as hypothesis testing that can account for various forms of dependence and for location/timing errors. In addition, JNTP develops forecast evaluation tools and training on verification methods for use by members of the operational, model development, and research communities.

Verifying predictions on different spatial and temporal scales presents different challenges. Our group works with model developers and end users to address issues such as: 1) displacement errors in storm-scale and mesoscale simulations; 2) forecast consistency of NWP simulations at any scale; 3) limits of predictability of subseasonal-to-seasonal predictions; 4) time-agnostic pattern prediction for decadal climate prediction; 5) skill at predicting extreme (i.e., rarely occurring) events; and 6) skill at predicting events over lead times of minutes to days. Our expertise spans a wide variety of predictions, including gridded weather and climate fields (both deterministic and probabilistic), tropical cyclone track and intensity, and renewable energy prediction, as well as developing plans for systematic testing.
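The displacement-error issue noted above motivates neighborhood ("fuzzy") verification, which credits forecasts that place an event close to, rather than exactly at, the observed location. The sketch below implements the Fractions Skill Score of Roberts and Lean (2008), one widely used neighborhood method (not necessarily the group's own implementation); the grids, threshold, and window sizes are illustrative.

```python
import numpy as np

def neighborhood_fraction(binary, window):
    """Fraction of 'event' points in a square window around each grid
    point, with zero padding at the domain edges."""
    half = window // 2
    padded = np.pad(binary.astype(float), half)
    out = np.empty(binary.shape)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            out[i, j] = padded[i:i + window, j:j + window].mean()
    return out

def fss(forecast, observed, threshold, window):
    """Fractions Skill Score: compares event fractions within a
    neighborhood, so a spatially displaced but otherwise accurate
    forecast is not penalized as harshly as in point matching."""
    pf = neighborhood_fraction(forecast >= threshold, window)
    po = neighborhood_fraction(observed >= threshold, window)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

# A displaced feature: identical rain area shifted by two grid points
obs = np.zeros((20, 20)); obs[8:12, 8:12] = 5.0
fct = np.zeros((20, 20)); fct[8:12, 10:14] = 5.0
score_point = fss(fct, obs, threshold=1.0, window=1)  # grid-point match
score_nbhd  = fss(fct, obs, threshold=1.0, window=9)  # 9x9 neighborhood
```

With a single-point window the displaced feature is heavily penalized; widening the neighborhood raises the score, reflecting that the forecast is useful at a coarser spatial scale.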

The wide variety of projects and activities undertaken by our group includes: extreme value statistical analysis applied to weather and climate data, development and testing of spatial forecast verification techniques, support of systematic testing and evaluation (T&E) activities within RAL, and the development of a state-of-the-art suite of software tools for performing forecast verification. These tools are free to download and include the Model Evaluation Tools (MET) and several R packages (e.g., distillery, extRemes, ismev, smoothie, SpatialVx, and verification).
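The extreme value analysis supported by packages such as extRemes typically involves fitting a generalized extreme value (GEV) distribution to block maxima and estimating return levels. A minimal sketch of that workflow in Python using scipy.stats is shown below; the annual-maximum data are synthetic, and this is not the R packages' own code (note that scipy's shape-parameter sign convention also differs from the common GEV parameterization).

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual-maximum series drawn from a known GEV, standing in
# for, e.g., yearly maximum daily precipitation at a station
rng = np.random.default_rng(0)
annual_max = genextreme.rvs(c=-0.1, loc=30.0, scale=5.0,
                            size=60, random_state=rng)

# Fit a GEV to the block maxima by maximum likelihood
shape, loc, scale = genextreme.fit(annual_max)

# 100-year return level: value exceeded with probability 1/100 per year
rl_100 = genextreme.isf(1.0 / 100.0, shape, loc, scale)
```

Return levels like `rl_100` are a standard way to communicate the risk of rarely occurring events to end users, and they connect directly to the group's interest in skill at predicting extremes.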

Active research:

  • Spatial methods
  • Forecast consistency
  • Extremes
  • Verification of remotely sensed fields
  • Process-oriented/diagnostic verification
  • Representation of observation uncertainty

Representative Projects

  • MesoVICT: The spatial forecast verification Inter-Comparison Project (ICP) and its follow-on, MesoVICT, sifted through the maze of newly proposed methods for verifying primarily high-resolution forecasts against analysis products on the same regular grid. The projects also forged a consistent set of test cases, so that methods proposed in the future can be evaluated on a common benchmark and their results cross-compared.


Please direct questions/comments about this page to:

Tara Jensen

Project Manager II