Forecast Evaluation
Providing meaningful information to researchers and operational end users of weather and climate models
Test and Evaluation
The JNT makes forecast system evaluation operationally relevant by executing rigorous end-to-end tests on many forecasts spanning multiple seasons. The cases selected for these retrospective tests span a broad range of weather regimes, from quiescent to strong flows. The exact periods chosen vary based on the type of phenomenon that is the focus of the test. For some test activities, these cases are chosen from all four seasons (e.g., extratropical cases for general predictions), whereas for others the cases come from a particular season (e.g., hurricane season, convective season). The JNT's evaluation of these retrospective forecasts includes standard verification techniques, as well as new verification techniques when appropriate.
By conducting carefully controlled testing, including the generation of objective verification statistics, the JNT is able to provide the operational community with guidance for selecting new NWP technologies with potential value for operational implementation. JNT testing also provides the research community with baselines against which the impacts of new techniques can be evaluated. The statistical results may also aid researchers in selecting model configurations to use for their projects.
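As a minimal sketch of how a candidate configuration might be compared against such a baseline (the data are synthetic and the choice of RMSE and the skill-score form are illustrative assumptions, not the JNT's actual procedure):

```python
import numpy as np

def rmse(forecast, observed):
    """Root-mean-square error over all cases."""
    return np.sqrt(np.mean((np.asarray(forecast) - np.asarray(observed)) ** 2))

# Hypothetical retrospective cases: observations plus forecasts from a
# baseline configuration and a candidate new technique (synthetic data).
rng = np.random.default_rng(0)
observed = rng.normal(loc=280.0, scale=5.0, size=500)    # e.g., 2-m temperature (K)
baseline = observed + rng.normal(scale=2.0, size=500)    # baseline forecast errors
candidate = observed + rng.normal(scale=1.5, size=500)   # candidate forecast errors

rmse_base = rmse(baseline, observed)
rmse_cand = rmse(candidate, observed)

# Skill score: fractional improvement of the candidate over the baseline
# (1.0 = perfect, 0.0 = no better than baseline, negative = worse).
skill = 1.0 - rmse_cand / rmse_base
print(f"baseline RMSE={rmse_base:.2f}, candidate RMSE={rmse_cand:.2f}, skill={skill:.2f}")
```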
Verification and Applied Statistics
Statistical verification of forecasts is a critical component of their development. Verification also benefits forecasters and end users by supplying them with objective data about the quality or accuracy of the forecasts, which can feed into decision processes.
The JNT focuses on developing statistically meaningful, advanced verification tools for assessing and comparing forecast performance. JNT staff support international verification efforts and contribute advanced statistical methods, including extreme value theory. The JNT also provides statistical support for other projects in RAL.
The JNT continually develops, updates, and supports the Model Evaluation Tools (MET), a community, state-of-the-science verification package that contains novel verification methods, including the Method for Object-based Diagnostic Evaluation (MODE).
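MODE itself identifies objects by convolving and thresholding a field and then matches them using fuzzy-logic interest functions; the sketch below is a much-simplified, hypothetical illustration of the object-based idea (smooth, threshold, label, compare centroids), not MET's actual implementation:

```python
import numpy as np
from scipy import ndimage

def find_objects(field, smooth_sigma=2.0, threshold=5.0):
    """Smooth a gridded field, threshold it, and label contiguous objects."""
    smoothed = ndimage.gaussian_filter(field, sigma=smooth_sigma)
    labels, n_objects = ndimage.label(smoothed >= threshold)
    centroids = ndimage.center_of_mass(smoothed, labels, range(1, n_objects + 1))
    return labels, centroids

# Synthetic forecast and observed precipitation fields with one displaced feature.
yy, xx = np.mgrid[0:100, 0:100]
observed = 10.0 * np.exp(-((yy - 50) ** 2 + (xx - 40) ** 2) / 100.0)
forecast = 10.0 * np.exp(-((yy - 55) ** 2 + (xx - 52) ** 2) / 100.0)

_, obs_centroids = find_objects(observed)
_, fcst_centroids = find_objects(forecast)

# Displacement error between matched objects (here, a single pair).
for (oy, ox), (fy, fx) in zip(obs_centroids, fcst_centroids):
    displacement = np.hypot(fy - oy, fx - ox)
    print(f"object displacement: {displacement:.1f} grid points")
```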
Forecast Evaluation
The roots of applied statistics in RAL are in forecast verification, which is the process of determining the quality of forecasts. Statistical verification of forecasts is a critical component of their optimization. Improvements can be made by evaluating forecast products throughout the development process as deficiencies in the algorithms are discovered. Verification also benefits forecasters and end users by supplying them with objective data about the quality or accuracy of the forecasts, which can feed into decision processes (Brown, 1996).
The JNT verification and statistics team develops improved verification approaches and tools that yield more meaningful and relevant information about forecast performance. The focus of this effort is on diagnostic, statistically valid approaches, including feature-based evaluation of precipitation and convective forecasts, as well as hypothesis testing that can account for various forms of dependence and for location/timing errors. In addition, the JNT develops forecast evaluation tools and training on verification methods that are available for use by members of the operational, model development, and research communities.
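As a hypothetical sketch of hypothesis testing that respects temporal dependence (the block length, AR(1) error model, and test statistic here are illustrative assumptions), a circular block bootstrap can attach a confidence interval to the difference in mean absolute error between two competing forecasts:

```python
import numpy as np

def block_bootstrap_ci(diff, block_len=10, n_boot=2000, alpha=0.05, seed=0):
    """Circular block bootstrap CI for the mean of a serially dependent series."""
    rng = np.random.default_rng(seed)
    n = len(diff)
    n_blocks = int(np.ceil(n / block_len))
    means = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n, size=n_blocks)
        idx = (starts[:, None] + np.arange(block_len)) % n   # wrap around (circular)
        means[b] = diff[idx.ravel()[:n]].mean()
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return diff.mean(), (lo, hi)

# Synthetic, serially correlated absolute errors for two competing forecasts.
rng = np.random.default_rng(1)
noise = rng.normal(size=400)
ar1 = np.zeros(400)
for t in range(1, 400):                # AR(1) dependence in the errors
    ar1[t] = 0.6 * ar1[t - 1] + noise[t]
err_a = np.abs(ar1 + 1.0)
err_b = np.abs(ar1 + 0.8)

mean_diff, (lo, hi) = block_bootstrap_ci(err_a - err_b)
print(f"mean MAE difference={mean_diff:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
# If the CI excludes zero, the MAE difference is unlikely to be due to chance.
```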
Verifying predictions on different spatial and temporal scales presents different challenges. Our group works with model developers and end users to address issues such as: 1) displacement errors in storm-scale and mesoscale simulations; 2) forecast consistency of NWP simulations at any scale; 3) limits of predictability of subseasonal-to-seasonal predictions; 4) time-agnostic pattern prediction for decadal climate prediction; 5) skill at predicting extreme (i.e., rarely occurring) events; and 6) skill at predicting events on time scales of minutes to days. Our expertise spans evaluation of a wide variety of predictions, including gridded weather and climate fields (both deterministic and probabilistic), tropical cyclone tracks and intensity, and renewable energy predictions, as well as the development of plans for systematic testing.
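Skill at rare events, for instance, is commonly summarized from a 2x2 contingency table of threshold exceedances. The sketch below (synthetic data; the 25 mm threshold is an illustrative assumption) computes the probability of detection, false alarm ratio, and critical success index:

```python
import numpy as np

def contingency_scores(forecast, observed, threshold):
    """POD, FAR, and CSI from exceedances of a fixed event threshold."""
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return pod, far, csi

# Synthetic 24-h precipitation (mm); a 25 mm threshold makes the event rare.
rng = np.random.default_rng(2)
observed = rng.gamma(shape=2.0, scale=4.0, size=5000)
forecast = observed * rng.normal(loc=1.0, scale=0.3, size=5000)

pod, far, csi = contingency_scores(forecast, observed, threshold=25.0)
print(f"POD={pod:.2f}, FAR={far:.2f}, CSI={csi:.2f}")
```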
The wide variety of projects and activities undertaken by our group includes: extreme value statistical analysis applied to weather and climate data, development and testing of spatial forecast verification techniques, support of systematic testing and evaluation (T&E) activities within RAL, and development of a state-of-the-art suite of software tools for performing forecast verification. These tools are free to download and include the Model Evaluation Tools (MET) and several R packages (e.g., distillery, extRemes, ismev, smoothie, SpatialVx, and verification).
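As a hypothetical illustration of the extreme value analysis mentioned above (this uses scipy rather than the extRemes R package, and the "annual maximum" series is synthetic), a generalized extreme value (GEV) distribution can be fit to block maxima to estimate, say, a 100-year return level:

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual-maximum series (e.g., 50 years of peak daily rainfall, mm).
rng = np.random.default_rng(3)
annual_max = rng.gumbel(loc=60.0, scale=12.0, size=50)

# Fit a GEV distribution to the block maxima by maximum likelihood.
# (Note: scipy's shape-parameter sign convention differs from extRemes.)
shape, loc, scale = genextreme.fit(annual_max)

# 100-year return level: the value exceeded with probability 1/100 each year.
return_level_100 = genextreme.isf(1.0 / 100.0, shape, loc=loc, scale=scale)
print(f"GEV shape={shape:.2f}, loc={loc:.1f}, scale={scale:.1f}")
print(f"estimated 100-year return level: {return_level_100:.1f} mm")
```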
Active research:
- Spatial methods (see the sketch following this list)
- Forecast consistency
- Extremes
- Verification of remotely sensed fields
- Process-oriented/diagnostic verification
- Representation of observation uncertainty
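As a sketch of the spatial (neighborhood) methods listed above, the fractions skill score (FSS) compares fractional event coverage within neighborhoods rather than requiring grid-point matches. The implementation below is a simplified, hypothetical illustration on synthetic fields, not MET's:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(forecast, observed, threshold, window):
    """FSS: neighborhood-based skill for gridded event fields (1 = perfect)."""
    f_frac = uniform_filter((forecast >= threshold).astype(float), size=window)
    o_frac = uniform_filter((observed >= threshold).astype(float), size=window)
    mse = np.mean((f_frac - o_frac) ** 2)
    mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)  # worst-case reference
    return 1.0 - mse / mse_ref

# Synthetic precipitation fields: the same feature, displaced by ten grid points.
yy, xx = np.mgrid[0:100, 0:100]
observed = 10.0 * np.exp(-((yy - 50) ** 2 + (xx - 45) ** 2) / 80.0)
forecast = 10.0 * np.exp(-((yy - 50) ** 2 + (xx - 55) ** 2) / 80.0)

# FSS rises with neighborhood size: displacement errors are forgiven at
# scales larger than the displacement itself.
for window in (1, 5, 15, 31):
    fss = fractions_skill_score(forecast, observed, threshold=5.0, window=window)
    print(f"window={window:2d}: FSS={fss:.2f}")
```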
Representative Projects
- MesoVICT: The spatial forecast verification Inter-Comparison Project (ICP) and the follow-on Mesoscale Verification Inter-Comparison over Complex Terrain (MesoVICT) project sifted through the maze of newly proposed methods for verifying primarily high-resolution forecasts against analysis products on the same regular grid. The projects also established a consistent set of test cases, so that methods proposed in the future can be evaluated against a common set of results that can be cross-compared.
Contact
Tara Jensen
Project Manager II