HAPpy Hour Seminar - Predicting the Future of Downscaled Climate Projections
3:00 – 4:00 pm MDT
In recent years, the market has been flooded with downscaled climate data for applications in risk quantification. Such datasets are built from Global Climate Model (GCM) outputs using statistical, dynamical, or, more recently, machine learning-based (ML-based) techniques. Here, an ongoing ‘brute force’ exercise is presented in which 25 GCMs from the 6th Phase of the Coupled Model Intercomparison Project (CMIP6) were dynamically downscaled across the western U.S. to a 9-km grid. Practical matters of GCM selection, data management, and quality control are discussed in addition to the downscaling itself. Biases and downscaled data quality issues are frankly detailed. We also show that a simple bias correction of the GCM outputs prior to downscaling yields high-resolution temperature and precipitation biases comparable to those obtained when bias correction is applied only after downscaling. We further discuss the impacts of pre-downscaling bias correction on downscaled temperature trends and the uncertainties it introduces into future projections. The development of such quality-controlled climate projections is timely: continual advances in ML-based downscaling require high-fidelity training/target data that can bypass longstanding stationarity issues, encode highly nonlinear feedbacks, and capture the relationships between physical variables that are critical for projecting their future changes.
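To make the "simple bias correction prior to downscaling" idea concrete, below is a minimal sketch of one common approach, an additive monthly "delta" correction, in which each future GCM value is shifted by the historical mean bias (observed climatology minus GCM climatology) for its calendar month. The function names, variables, and toy values are illustrative assumptions, not the seminar's actual method.

```python
# Hypothetical sketch of additive ("delta") bias correction applied to GCM
# temperature output before downscaling. Names and values are illustrative;
# the seminar's actual correction scheme may differ.

def monthly_mean(values, months, month):
    """Mean of `values` restricted to entries whose calendar month matches `month`."""
    sel = [v for v, m in zip(values, months) if m == month]
    return sum(sel) / len(sel)

def delta_correct(gcm_hist, gcm_future, obs, months_hist, months_future):
    """Shift each future GCM value by that month's historical mean bias
    (observed climatology minus GCM climatology over the same period)."""
    corrected = []
    for v, m in zip(gcm_future, months_future):
        bias = monthly_mean(obs, months_hist, m) - monthly_mean(gcm_hist, months_hist, m)
        corrected.append(v + bias)
    return corrected

# Toy example: a GCM that runs 2 degrees too cold in January.
months_hist = [1, 1, 1]
gcm_hist = [-8.0, -7.0, -9.0]   # historical GCM January temperatures (deg C)
obs      = [-6.0, -5.0, -7.0]   # observed January temperatures (deg C)
gcm_future = [-6.5, -5.5]       # future GCM January temperatures
print(delta_correct(gcm_hist, gcm_future, obs, months_hist, [1, 1]))  # → [-4.5, -3.5]
```

Applying such a correction per month (or per quantile, in more sophisticated schemes) before handing the fields to the regional model is what distinguishes pre-downscaling from post-downscaling bias correction as compared in the abstract.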