Lesson 2 - Running WRF-Hydro

Overview

In this lesson, we will cover the basics of constructing and running a WRF-Hydro simulation using a prepared domain for the 'Gridded' configuration. For a detailed description of model configurations see the Technical Description.

Software and conventions

The easiest way to run these lessons is via the wrfhydro/training Docker container, which has all software dependencies and data pre-installed.

For a complete description of the software environment used for this training please see Getting started.

You may execute commands by running each cell of this notebook, or you may open a terminal in Jupyter Lab by selecting New -> Terminal in your Home tab of Jupyter Lab and enter the commands manually. You can also use your own terminal by logging into the container with the command: docker exec -it wrf-hydro-training bash

All paths used in this lesson assume that the lesson materials are located under your home directory in a folder named wrf-hydro-training. If your materials are located in another directory, you will not be able to run the commands in this notebook inside Jupyter and will need to type them manually in your terminal session.

Constructing a simulation with a prepared domain

In this section we will describe the primary files needed to run a WRF-Hydro simulation.

A WRF-Hydro simulation consists of the following major components:

  • executable/binary
  • parameter files
  • domain files
  • forcing files
  • namelists

We will only cover basic descriptions of these elements in this lesson; for a detailed description see the Technical Description.

The WRF-Hydro model code, compiled executable/binary, and associated parameter table files were described in Lesson 1. If you have not already completed Lesson 1, please stop and do so now.

Model run-time options are specified in two namelist files: hydro.namelist and namelist.hrldas. These namelist files specify, among other things, file paths, simulation duration, physics options, and output file selections. We will cover many of these options in more depth in Lessons 4 and 5.
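
Because both namelists are plain Fortran namelist files where full-line comments begin with '!', a quick way to see only the active settings is to filter out comment and blank lines. Below is a minimal sketch using a hypothetical stand-in file under /tmp; point the same filter at the real hydro.namelist or namelist.hrldas in your run directory.

```shell
# Create a tiny stand-in namelist for illustration (hypothetical content)
cat > /tmp/sample.namelist << 'EOF'
&HYDRO_nlist
! Specify what is being coupled
sys_cpl = 1
out_dt = 60
/
EOF

# Drop full-line comments (leading '!') and blank lines to show active settings
grep -v '^[[:space:]]*!' /tmp/sample.namelist | grep -v '^[[:space:]]*$'
```

Note this only removes full-line comments; inline comments after a setting (common in the WRF-Hydro namelists) are left in place.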

Orientation to the example_case

This lesson will use a prepared domain located in the ~/wrf-hydro-training/example_case directory. The structure of the example_case directory serves as a good example of how to organize your domain files. If using another domain with this lesson, such as one you may have created in the Geospatial processing tutorial, it is imperative that the file names and directory structure match those described below.

If using an official WRF-Hydro training example case, there will be a study area map and a Readme.txt file that describes the geographic setting, the directory, and the files.

First, let's take a look at the map of the study area (map.png).

Now, let's view the Readme.txt included with the domain for a brief description of the example case and its contents.

In [1]:
%%bash
cat ~/wrf-hydro-training/example_case/Readme.txt
#Overview 
This test case includes prepared geospatial data and input files for a
sample domain (region of interest) and prepared forcing data. This domain is 
a small region (13km x 11km) northwest of San Jose, Costa Rica including the Rio
Segundo and the Rio Desague. The simulation begins with a restart from a spinup 
period extending from 2014-01-01 to 2017-11-11. The forcing data prepared for this 
test case is 3 hourly data from the NASA Global Land Data Assimilation System 
(GLDAS) v2.1. The domain files included are for the gridded configuration of the 
WRF-Hydro model. See the WRF-Hydro V5 Technical Description located at 
https://ral.ucar.edu/projects/wrf_hydro for a more detailed description of model 
physics options, configurations, and input files. Some non-standard files will be 
described below.

#Directory contents example_case: directory containing all input files for the
Rio Desague example test case     
	|     
	-FORCING: Directory containing all GLDAS HLRDAS formatted 3 hourly 
	forcing data for the simulation period.
	-Gridded: Directory containing all files required for the gridded routing
	configuration with lakes included        
		|
		-DOMAIN: Directory containing all geospatial data and input files for
		the Gridded routing option.
		-RESTART: Directory containing model restart files for the Gridded 
		routing option.       
		-hydro.namelist: Fortran namelist file for the hydro model.               
		-namelist.hrldas: Fortran namelist file for the Noah-MP land surface 
		model.
		-obs_nov_lesson_formatted.csv: CSV containing observational data 
	-Supplemental: Directory containing supplemental files used to create the example 
	case, including forcing files used for spinup
		|
		-supplemental_forcing.tar.gz: Tar ball containing additional forcing 
		data for spinup
		-namelist.wps: Reduced namelist.wps used to create the geogrid file using
		the WRF-Hydro geogrid Docker utility.
		-supplemental_precip_cmorph.tar.gz: Supplemental regridded precipitation data
		from CMORPH.
		-supplemental_precip_gpm.tar.gz: Supplemental regridded precipitation data
		from GPM.
	-map.png: Study area map

Now let's take a look at the actual directory.

In [2]:
%%bash
ls ~/wrf-hydro-training/example_case
FORCING
Gridded
Readme.txt
obs_event.csv
supplemental

In this example case we have created three routing configurations for WRF-Hydro: National Water Model (NWM), Gridded, and Reach. For detailed descriptions of routing and physics configurations, see the Technical Description. These three routing configurations have different geospatial data associated with them, and thus are represented by three separate directories in the ~/wrf-hydro-training/example_case directory. Additionally, you will notice a fourth directory, Gridded_no_lakes. When running the Gridded configuration without lakes, some modifications to the geospatial data are needed to remove the lakes from the domain. Therefore, domain files for the Gridded configuration without lakes are located in the separate Gridded_no_lakes directory.

FORCING

This directory contains all the forcing data for our simulation. Note that there is only one FORCING directory. The same forcing data can be used with all three configurations.

The Gridded configuration directory

For this lesson, we will be running the Gridded configuration. Now we will explore the ~/wrf-hydro-training/example_case/Gridded directory.

In [3]:
%%bash
ls ~/wrf-hydro-training/example_case/Gridded
DOMAIN
RESTART
hydro.namelist
namelist.hrldas
namelist.hrldas.supp

The contents of this directory are described briefly in the Readme.txt file that we viewed earlier, but we will discuss them again here. For a more detailed description of these files beyond what is given below, see the Technical Description.

DOMAIN: Directory containing all geospatial data and input files for the Gridded routing option with lakes included

In [4]:
%%bash
ls ~/wrf-hydro-training/example_case/Gridded/DOMAIN
Fulldom_hires.nc
GEOGRID_LDASOUT_Spatial_Metadata.nc
GWBASINS.nc
GWBUCKPARM.nc
geo_em.d01.nc
hydro2dtbl.nc
soil_properties.nc
wrfinput_d01.nc
Each file is described below, along with its source and whether it is required.

Fulldom_hires.nc: High-resolution full domain file including all fields specified on the routing grid. Source: WRF-Hydro GIS pre-processing toolkit. Required: yes.
GWBASINS.nc: 2D file defining the locations of groundwater basins on a grid. Source: WRF-Hydro GIS pre-processing toolkit. Required: when the baseflow bucket model is turned on and user-defined mapping is off.
GWBUCKPARM.nc: Groundwater parameter table containing bucket model parameters for each basin. Source: WRF-Hydro GIS pre-processing toolkit. Required: when the baseflow bucket model is turned on.
LAKEPARM.nc: Lake parameter table containing lake model parameters for each catchment. Source: WRF-Hydro GIS pre-processing toolkit. Required: when lake and reservoir routing is turned on.
hydro2dtbl.nc: Spatially distributed parameter table for lateral flow routing within WRF-Hydro. Source: create_SoilProperties.R script (will also be generated automatically by WRF-Hydro). Required: when using spatially distributed terrain routing parameters.
geo_em.d01.nc: Data required to define the domain and geospatial attributes of a spatially distributed, or gridded, 1-dimensional (vertical) land surface model (LSM). Source: GEOGRID utility in the WRF preprocessing system (WPS). Required: yes.
wrfinput_d01.nc: File including all necessary fields for the Noah-MP land surface model, but with spatially uniform initial conditions; the model will likely require additional spin-up time when initialized from this file. Source: create_Wrfinput.R script. Required: yes.
soil_properties.nc: Spatially distributed land surface model parameters. Source: create_SoilProperties.R script. Required: if the SPATIAL_SOIL compile-time option is set to 1.
GEOGRID_LDASOUT_Spatial_Metadata.nc: Projection and coordinate information for the land surface model grid. Source: WRF-Hydro GIS pre-processing toolkit. Required: no, but it allows for CF-compliant outputs.
lake_shapes/: Supplemental shapefiles that define lakes. Source: ArcGIS. Required: no.

RESTART: Directory containing model restart files.

In [5]:
%%bash
ls ~/wrf-hydro-training/example_case/Gridded/RESTART
HYDRO_RST.2017-11-11_00:00_DOMAIN1
RESTART.2017111100_DOMAIN1

Restart files are an essential part of the WRF-Hydro modeling system. They are output on a fixed timestep specified by the user in the namelist.hrldas and hydro.namelist files, and represent a complete 'snapshot' of the model state at that time. These files can be used to restart a WRF-Hydro simulation from where the previous simulation terminated with all the model states intact.

When running a WRF-Hydro simulation, you may start your simulation with default initial conditions, referred to as a 'cold start'. When starting from a cold start, a model spinup period is needed to move the model state away from the default initial conditions to a more realistic, physically-based model state. Model output from the spinup period is generally not used for interpretation.

Restart files output at the end the spinup period can be used as the initial conditions for subsequent simulations, referred to as a 'warm start'. Simulations that start from a 'warm start' are generally the primary target for interpretation. 'Warm' and 'cold' starting the model will be discussed more in Lesson 4.
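
As a concrete sketch, warm versus cold starting is controlled by one restart entry in each namelist. The lines below mirror the entries in this example case; commenting them out with '!' (as shown) would produce a cold start, while leaving them uncommented produces a warm start:

```
! In hydro.namelist: comment out to cold start the hydro model
!RESTART_FILE = 'RESTART/HYDRO_RST.2017-11-11_00:00_DOMAIN1'

! In namelist.hrldas: the matching land surface model restart entry
!RESTART_FILENAME_REQUESTED = "RESTART/RESTART.2017111100_DOMAIN1"
```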

Namelists

Namelists are another key component of the WRF-Hydro modeling system and are the primary means of specifying inputs, outputs, and run-time options. There are two namelist files used by WRF-Hydro: hydro.namelist for the routing and hydrologic model, and namelist.hrldas for the land surface model. NOTE: These filenames are hard-coded into the model and cannot be changed.

For all official WRF-Hydro domains, namelists will be included with each of the three domain configurations. This is done so that a user can easily run each configuration with minimal setup, and they serve as a starting point for users to specify their own namelists for a given configuration.

There are different namelists for each configuration because certain namelist options are specific to the domain configuration used. We will cover the various namelist options in detail in Lesson 4, but for now we will simply view the namelist and use the default namelist provided with the Gridded configuration to construct our simulation.

Take a moment to read through the two namelists below and note that all file paths are relative to the directory containing the namelist. This is the recommended way of specifying file paths.

In [6]:
%%bash
cat ~/wrf-hydro-training/example_case/Gridded/hydro.namelist
&HYDRO_nlist
!!!! ---------------------- SYSTEM COUPLING ----------------------- !!!!

! Specify what is being coupled:  1=HRLDAS (offline Noah-LSM), 2=WRF, 3=NASA/LIS, 4=CLM
sys_cpl = 1

!!!! ------------------- MODEL INPUT DATA FILES ------------------- !!!!

! Specify land surface model gridded input data file (e.g.: "geo_em.d01.nc")
GEO_STATIC_FLNM = "./DOMAIN/geo_em.d01.nc"

! Specify the high-resolution routing terrain input data file (e.g.: "Fulldom_hires.nc")
GEO_FINEGRID_FLNM = "./DOMAIN/Fulldom_hires.nc"

! Specify the spatial hydro parameters file (e.g.: "hydro2dtbl.nc")
! If you specify a filename and the file does not exist, it will be created for you.
HYDROTBL_F = "./DOMAIN/hydro2dtbl.nc"

! Specify spatial metadata file for land surface grid. (e.g.: "GEOGRID_LDASOUT_Spatial_Metadata.nc")
LAND_SPATIAL_META_FLNM = "./DOMAIN/GEOGRID_LDASOUT_Spatial_Metadata.nc"

! Specify the name of the restart file if starting from restart...comment out with '!' if not...
RESTART_FILE  = 'RESTART/HYDRO_RST.2017-11-11_00:00_DOMAIN1'

!!!! --------------------- MODEL SETUP OPTIONS -------------------- !!!!

! Specify the domain or nest number identifier...(integer)
IGRID = 1

! Specify the restart file write frequency...(minutes)
! A value of -99999 will output restarts on the first day of the month only.
rst_dt = 14400

! Reset the LSM soil states from the high-res routing restart file (1=overwrite, 0=no overwrite)
! NOTE: Only turn this option on if overland or subsurface routing is active!
rst_typ = 1

! Restart file format control
rst_bi_in = 0       !0: use netcdf input restart file (default)
                    !1: use parallel io for reading multiple restart files, 1 per core
rst_bi_out = 0      !0: use netcdf output restart file (default)
                    !1: use parallel io for outputting multiple restart files, 1 per core

! Restart switch to set restart accumulation variables to 0 (0=no reset, 1=yes reset to 0.0)
RSTRT_SWC = 0

! Specify baseflow/bucket model initialization...(0=cold start from table, 1=restart file)
GW_RESTART = 1

!!!! -------------------- MODEL OUTPUT CONTROL -------------------- !!!!

! Specify the output file write frequency...(minutes)
out_dt = 60

! Specify the number of output times to be contained within each output history file...(integer)
!   SET = 1 WHEN RUNNING CHANNEL ROUTING ONLY/CALIBRATION SIMS!!!
!   SET = 1 WHEN RUNNING COUPLED TO WRF!!!
SPLIT_OUTPUT_COUNT = 1

! Specify the minimum stream order to output to netcdf point file...(integer)
! Note: lower value of stream order produces more output.
order_to_write = 1

! Flag to turn on/off new I/O routines: 0 = deprecated output routines (use when running with Noah LSM),
! 1 = with scale/offset/compression, ! 2 = with scale/offset/NO compression,
! 3 = compression only, 4 = no scale/offset/compression (default)
io_form_outputs = 4

! Realtime run configuration option:
! 0=all (default), 1=analysis, 2=short-range, 3=medium-range, 4=long-range, 5=retrospective,
! 6=diagnostic (includes all of 1-4 outputs combined)
io_config_outputs = 0

! Option to write output files at time 0 (restart cold start time): 0=no, 1=yes (default)
t0OutputFlag = 1

! Options to output channel & bucket influxes. Only active for UDMP_OPT=1.
! Nonzero choice requires that out_dt above matches NOAH_TIMESTEP in namelist.hrldas.
! 0=None (default), 1=channel influxes (qSfcLatRunoff, qBucket)
! 2=channel+bucket fluxes    (qSfcLatRunoff, qBucket, qBtmVertRunoff_toBucket)
! 3=channel accumulations    (accSfcLatRunoff, accBucket) *** NOT TESTED ***
output_channelBucket_influx = 0

! Output netcdf file control
CHRTOUT_DOMAIN = 0           ! Netcdf point timeseries output at all channel points (1d)
                             !      0 = no output, 1 = output
CHANOBS_DOMAIN = 1           ! Netcdf point timeseries at forecast points or gage points (defined in Routelink)
                             !      0 = no output, 1 = output at forecast points or gage points.
CHRTOUT_GRID = 0             ! Netcdf grid of channel streamflow values (2d)
                             !      0 = no output, 1 = output
                             !      NOTE: Not available with reach-based routing
LSMOUT_DOMAIN = 0            ! Netcdf grid of variables passed between LSM and routing components (2d)
                             !      0 = no output, 1 = output
                             !      NOTE: No scale_factor/add_offset available
RTOUT_DOMAIN = 1             ! Netcdf grid of terrain routing variables on routing grid (2d)
                             !      0 = no output, 1 = output
output_gw = 1                ! Netcdf GW output
                             !      0 = no output, 1 = output
outlake  = 0                 ! Netcdf grid of lake values (1d)
                             !      0 = no output, 1 = output
frxst_pts_out = 0            ! ASCII text file of forecast points or gage points (defined in Routelink)
                             !      0 = no output, 1 = output

!!!! ------------ PHYSICS OPTIONS AND RELATED SETTINGS ------------ !!!!

! Specify the number of soil layers (integer) and the depth of the bottom of each layer... (meters)
! Notes: In Version 1 of WRF-Hydro these must be the same as in the namelist.input file.
!      Future versions will permit this to be different.
NSOIL=4
ZSOIL8(1) = -0.10
ZSOIL8(2) = -0.40
ZSOIL8(3) = -1.00
ZSOIL8(4) = -2.00

! Specify the grid spacing of the terrain routing grid...(meters)
DXRT = 100.0

! Specify the integer multiple between the land model grid and the terrain routing grid...(integer)
AGGFACTRT = 10

! Specify the channel routing model timestep...(seconds)
DTRT_CH = 10

! Specify the terrain routing model timestep...(seconds)
DTRT_TER = 10

! Switch to activate subsurface routing...(0=no, 1=yes)
SUBRTSWCRT = 1

! Switch to activate surface overland flow routing...(0=no, 1=yes)
OVRTSWCRT = 1

! Specify overland flow routing option: 1=Steepest Descent (D8) 2=CASC2D (not active)
! NOTE: Currently subsurface flow is only steepest descent
rt_option = 1

! Switch to activate channel routing...(0=no, 1=yes)
CHANRTSWCRT = 1

! Specify channel routing option: 1=Muskingum-reach, 2=Musk.-Cunge-reach, 3=Diff.Wave-gridded
channel_option = 3

! Specify the reach file for reach-based routing options (e.g.: "Route_Link.nc")
!route_link_f = "./DOMAIN/Route_Link.nc"

! If using channel_option=2, activate the compound channel formulation? (Default=.FALSE.)
! This option is currently only supported if using reach-based routing with UDMP=1.
compound_channel = .FALSE.

! Specify the lake parameter file (e.g.: "LAKEPARM.nc").
! Note REQUIRED if lakes are on.
!route_lake_f = "./DOMAIN/LAKEPARM.nc"

! Switch to activate baseflow bucket model...(0=none, 1=exp. bucket, 2=pass-through)
GWBASESWCRT = 1

! Groundwater/baseflow 2d mask specified on land surface model grid (e.g.: "GWBASINS.nc")
!Note: Only required if baseflow  model is active (1 or 2) and UDMP_OPT=0.
gwbasmskfil = "./DOMAIN/GWBASINS.nc"

! Groundwater bucket parameter file (e.g.: "GWBUCKPARM.nc")
GWBUCKPARM_file = "./DOMAIN/GWBUCKPARM.nc"

! User defined mapping, such as NHDPlus: 0=no (default), 1=yes
UDMP_OPT = 0

! If on, specify the user-defined mapping file (e.g.: "spatialweights.nc")
!udmap_file = "./DOMAIN/spatialweights.nc"

/

&NUDGING_nlist

! Path to the "timeslice" observation files.
timeSlicePath = "./nudgingTimeSliceObs/"

nudgingParamFile = "DOMAIN/nudgingParams.nc"

! Nudging restart file = "nudgingLastObsFile"
! nudgingLastObsFile defaults to '', which will look for nudgingLastObs.YYYY-mm-dd_HH:MM:SS.nc
!   **AT THE INITIALIZATION TIME OF THE RUN**. Set to a missing file to use no restart.
!nudgingLastObsFile = '/a/nonexistent/file/gives/nudging/cold/start'

!! Parallel input of nudging timeslice observation files?
readTimesliceParallel = .TRUE.

! temporalPersistence defaults to true, only runs if necessary params present.
temporalPersistence = .FALSE.

! The total number of last (obs, modeled) pairs to save in nudgingLastObs for
! removal of bias. This is the maximum array length. (This option is active when persistBias=FALSE)
! (Default=960=10days @15min obs resolution, if all the obs are present and longer if not.)
nLastObs = 960

! If using temporalPersistence the last observation persists by default.
! This option instead persists the bias after the last observation.
persistBias = .FALSE.

! AnA (FALSE)  vs Forecast (TRUE) bias persistence.
! If persistBias: Does the window for calculating the bias end at
! model init time (=t0)?
! FALSE = window ends at model time (moving),
! TRUE = window ends at init=t0(fcst) time.
! (If commented out, Default=FALSE)
! Note: Perfect restart tests require this option to be .FALSE.
biasWindowBeforeT0 = .FALSE.

! If persistBias: Only use this many last (obs, modeled) pairs. (If Commented out, Default=-1*nLastObs)
! > 0: apply an age-based filter, units=hours.
! = 0: apply no additional filter, use all available/usable obs.
! < 0: apply an count-based filter, units=count
maxAgePairsBiasPersist = -960

! If persistBias: The minimum number of last (obs, modeled) pairs, with age less than
! maxAgePairsBiasPersist, required to apply a bias correction. (default=8)
minNumPairsBiasPersist = 8

! If persistBias: give more weight to observations closer in time? (default=FALSE)
invDistTimeWeightBias = .TRUE.

! If persistBias: "No constructive interference in bias correction?", Reduce the bias adjustment
! when the model and the bias adjustment have the same sign relative to the modeled flow at t0?
! (default=FALSE)
! Note: Perfect restart tests require this option to be .FALSE.
noConstInterfBias = .FALSE.

/
In [7]:
%%bash
cat ~/wrf-hydro-training/example_case/Gridded/namelist.hrldas
&NOAHLSM_OFFLINE

HRLDAS_SETUP_FILE = "./DOMAIN/wrfinput_d01.nc"
INDIR = "./FORCING"
SPATIAL_FILENAME = "./DOMAIN/soil_properties.nc"
OUTDIR = "./"

START_YEAR  = 2017
START_MONTH = 11
START_DAY   = 11
START_HOUR  = 00
START_MIN   = 00

RESTART_FILENAME_REQUESTED = "RESTART/RESTART.2017111100_DOMAIN1"

! Specification of simulation length in days OR hours
KDAY = 19
! KHOUR = 8

! Physics options (see the documentation for details)
DYNAMIC_VEG_OPTION                = 4
CANOPY_STOMATAL_RESISTANCE_OPTION = 1
BTR_OPTION                        = 1
RUNOFF_OPTION                     = 3
SURFACE_DRAG_OPTION               = 1
FROZEN_SOIL_OPTION                = 1
SUPERCOOLED_WATER_OPTION          = 1
RADIATIVE_TRANSFER_OPTION         = 3
SNOW_ALBEDO_OPTION                = 2
PCP_PARTITION_OPTION              = 1
TBOT_OPTION                       = 2
TEMP_TIME_SCHEME_OPTION           = 3
GLACIER_OPTION                    = 2
SURFACE_RESISTANCE_OPTION         = 4

! Timesteps in units of seconds
FORCING_TIMESTEP = 10800
NOAH_TIMESTEP    = 3600
OUTPUT_TIMESTEP  = 3600

! Land surface model restart file write frequency
RESTART_FREQUENCY_HOURS = 240

! Split output after split_output_count output times.
SPLIT_OUTPUT_COUNT = 1

! Soil layer specification
NSOIL=4
soil_thick_input(1) = 0.10
soil_thick_input(2) = 0.30
soil_thick_input(3) = 0.60
soil_thick_input(4) = 1.00

! Forcing data measurement height for winds, temp, humidity
ZLVL = 10.0

! Restart file format options
rst_bi_in = 0      !0: use netcdf input restart file
                   !1: use parallel io for reading multiple restart files (1 per core)
rst_bi_out = 0     !0: use netcdf output restart file
                   !1: use parallel io for outputting multiple restart files (1 per core)

/

&WRF_HYDRO_OFFLINE

! Specification of forcing data:  1=HRLDAS-hr format, 2=HRLDAS-min format, 3=WRF,
!    4=Idealized, 5=Idealized w/ Spec. Precip.,
!    6=HRLDAS-hourly format w/ Spec. Precip., 7=WRF w/ Spec. Precip.,
!    9=Channel-only forcing, see hydro.namelist output_channelBucket_influxes
!    10=Channel+Bucket only forcing, see hydro.namelist output_channelBucket_influxes
FORC_TYP = 1

/

Creating a simulation directory

Now that we have covered the major functional elements that constitute a simulation, we will combine these elements to construct a simulation. This is done by gathering the FORCING, Gridded/DOMAIN, and Gridded/RESTART directories, the model run files from trunk/NDHMS/Run, and the namelist.hrldas and hydro.namelist files together in a single directory that will serve as our simulation directory. However, to save disk space it is often preferable to create symbolic links rather than copies of the actual files. NOTE: We will only use symbolic links for files that we will NOT be editing.

fig2.png

In the following steps, we will construct our simulation directory.

Step 1. Create simulation directory

We will create a directory for our simulation

In [8]:
%%bash
mkdir -p ~/wrf-hydro-training/output/lesson2/run_gridded_default
ls ~/wrf-hydro-training/output/lesson2/
run_gridded_default

Step 2. Copy model run files

We will copy the required model run files from the ~/wrf-hydro-training/wrf_hydro_nwm_public/trunk/NDHMS/Run directory. These files are small, so we will make actual copies rather than symbolic links. Additionally, copies are preferred here because a user may want to edit the *.TBL files, and as stated previously, symbolic links should not be used for files that we may edit.

In [9]:
%%bash
cp ~/wrf-hydro-training/wrf_hydro_nwm_public/trunk/NDHMS/Run/*.TBL \
~/wrf-hydro-training/output/lesson2/run_gridded_default

cp ~/wrf-hydro-training/wrf_hydro_nwm_public/trunk/NDHMS/Run/wrf_hydro.exe \
~/wrf-hydro-training/output/lesson2/run_gridded_default

ls ~/wrf-hydro-training/output/lesson2/run_gridded_default
CHANPARM.TBL
GENPARM.TBL
HYDRO.TBL
MPTABLE.TBL
SOILPARM.TBL
wrf_hydro.exe

Step 3. Symlink DOMAIN, FORCING, and RESTART files

We will create symbolic links to the required forcing, domain, and restart files in the ~/wrf-hydro-training/example_case directory. These files can be large, so we will make symbolic links rather than copies of the actual files. NOTE: Because we are creating symbolic links, the source paths MUST be absolute and can't use ~. We can simply replace the ~ with the $HOME environment variable to make an absolute path.
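
To see why the absolute path matters, here is a small hypothetical demonstration under /tmp (none of these paths are part of the training materials). A symlink stores the literal path it was given, and a relative path is resolved from the directory containing the link, not from where you ran the command.

```shell
# Set up a throwaway source file and a separate "run" directory
mkdir -p /tmp/linkdemo/src /tmp/linkdemo/run
echo "data" > /tmp/linkdemo/src/file.nc
cd /tmp/linkdemo

# Relative source: the link stores "src/file.nc", which is resolved from
# run/ (the link's directory), where no such path exists -> dangling link
ln -sf src/file.nc run/broken_link
cat run/broken_link 2>/dev/null || echo "broken_link does not resolve"

# Absolute source: resolves correctly no matter where the link lives
ln -sf /tmp/linkdemo/src/file.nc run/good_link
cat run/good_link    # prints: data
```

This is the same reason the cp -as commands below use $HOME rather than ~ to build absolute source paths.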

In [10]:
%%bash
cp -as $HOME/wrf-hydro-training/example_case/FORCING \
~/wrf-hydro-training/output/lesson2/run_gridded_default

cp -as $HOME/wrf-hydro-training/example_case/Gridded/DOMAIN \
~/wrf-hydro-training/output/lesson2/run_gridded_default

cp -as $HOME/wrf-hydro-training/example_case/Gridded/RESTART \
~/wrf-hydro-training/output/lesson2/run_gridded_default

ls ~/wrf-hydro-training/output/lesson2/run_gridded_default
CHANPARM.TBL
DOMAIN
FORCING
GENPARM.TBL
HYDRO.TBL
MPTABLE.TBL
RESTART
SOILPARM.TBL
wrf_hydro.exe

Step 4. Copy namelist files

Because we are using the default prepared namelists from the example WRF-Hydro domain, we will copy those in as well. If you were using your own namelists, they would likely be edited and copied from elsewhere. These are small text files so we will make actual copies rather than symbolic links.

In [11]:
%%bash
cp ~/wrf-hydro-training/example_case/Gridded/namelist.hrldas \
~/wrf-hydro-training/output/lesson2/run_gridded_default

cp ~/wrf-hydro-training/example_case/Gridded/hydro.namelist \
~/wrf-hydro-training/output/lesson2/run_gridded_default

ls ~/wrf-hydro-training/output/lesson2/run_gridded_default
CHANPARM.TBL
DOMAIN
FORCING
GENPARM.TBL
HYDRO.TBL
MPTABLE.TBL
RESTART
SOILPARM.TBL
hydro.namelist
namelist.hrldas
wrf_hydro.exe

We have now constructed our simulation directory with all the requisite files. In the next section we will run the simulation using MPI.

Running WRF-Hydro using default run-time options

Now that we have constructed our simulation directory, we can run our simulation using the mpirun command, which takes a number of arguments. For this simple case we only need to supply one argument, the number of cores, via the -np option. We will set it to 2 cores.

We will pipe the output to a log file because running a simulation can generate a lot of standard output in the terminal.

In [12]:
%%bash
cd ~/wrf-hydro-training/output/lesson2/run_gridded_default
mpirun -np 2 ./wrf_hydro.exe >> run.log 2>&1

If your simulation ran successfully, there should now be a large number of output files in the ~/wrf-hydro-training/output/lesson2/run_gridded_default directory. We will describe these output files in more depth in Lesson 4. Additionally, detailed descriptions of the output files can be found in the Technical Description.

List the contents of the ~/wrf-hydro-training/output/lesson2/run_gridded_default directory.

In [13]:
%%bash
ls ~/wrf-hydro-training/output/lesson2/run_gridded_default | tail -40
201711291900.CHANOBS_DOMAIN1
201711291900.GWOUT_DOMAIN1
201711291900.LDASOUT_DOMAIN1
201711291900.RTOUT_DOMAIN1
201711292000.CHANOBS_DOMAIN1
201711292000.GWOUT_DOMAIN1
201711292000.LDASOUT_DOMAIN1
201711292000.RTOUT_DOMAIN1
201711292100.CHANOBS_DOMAIN1
201711292100.GWOUT_DOMAIN1
201711292100.LDASOUT_DOMAIN1
201711292100.RTOUT_DOMAIN1
201711292200.CHANOBS_DOMAIN1
201711292200.GWOUT_DOMAIN1
201711292200.LDASOUT_DOMAIN1
201711292200.RTOUT_DOMAIN1
201711292300.CHANOBS_DOMAIN1
201711292300.GWOUT_DOMAIN1
201711292300.LDASOUT_DOMAIN1
201711292300.RTOUT_DOMAIN1
201711300000.CHANOBS_DOMAIN1
201711300000.GWOUT_DOMAIN1
201711300000.LDASOUT_DOMAIN1
201711300000.RTOUT_DOMAIN1
CHANPARM.TBL
DOMAIN
FORCING
GENPARM.TBL
HYDRO.TBL
HYDRO_RST.2017-11-21_00:00_DOMAIN1
MPTABLE.TBL
RESTART
RESTART.2017112100_DOMAIN1
SOILPARM.TBL
diag_hydro.00000
diag_hydro.00001
hydro.namelist
namelist.hrldas
run.log
wrf_hydro.exe

There are also important files for determining the success or failure of the run: the diag_hydro.0000* files. These contain logs and diagnostics for the simulation run, and the number of diag files is equal to the number of cores used for the run. Since we ran using 2 cores, we have 2 diag_hydro.0000* files.

You can check that your simulation ran successfully by examining the last line of the diag files, which should read "The model finished successfully......."

In [14]:
%%bash
tail -1 ~/wrf-hydro-training/output/lesson2/run_gridded_default/diag_hydro.00000
 The model finished successfully.......
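
Rather than tailing each diag file by hand, you can loop over all of them at once. The sketch below fabricates two sample diag files under /tmp so it is self-contained; in practice, run the same loop inside your run_gridded_default directory against the real diag_hydro.0000* files.

```shell
# Fabricate two sample diag files for illustration (hypothetical content)
mkdir -p /tmp/diag_demo
printf 'timing info...\n The model finished successfully.......\n' > /tmp/diag_demo/diag_hydro.00000
printf 'timing info...\n The model finished successfully.......\n' > /tmp/diag_demo/diag_hydro.00001

# Report OK/FAILED for each diag file based on its last line
cd /tmp/diag_demo
for f in diag_hydro.*; do
  if tail -1 "$f" | grep -q 'The model finished successfully'; then
    echo "$f: OK"
  else
    echo "$f: FAILED"
  fi
done
```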

Next up - Basics of working with WRF-Hydro outputs

This concludes Lesson 2. In the next lesson, we will briefly discuss working with some of the output files. The output files from WRF-Hydro are standard netCDF4 files, so there are many ways to work with these data. In Lesson 3 we will simply cover a few Python libraries and commands that we will need for later lessons in this tutorial. Lesson 3 is by no means a comprehensive guide to working with netCDF files.

IT IS BEST TO EITHER SHUT DOWN OR CLOSE THIS LESSON BEFORE PROCEEDING TO THE NEXT LESSON TO AVOID POSSIBLY EXCEEDING ALLOCATED MEMORY. Shut down the lesson by either closing the browser tab for the lesson or selecting Kernel -> Shutdown in the Jupyter notebook toolbar.