This example of SR-LSTM only works with the following components:
- NeuralHydrology Python library, version 1.3.0
- Raven Hydrologic Modelling Framework, version 3.7
- RDRS forcing data, version 2.1, downloaded from CaSPAr
- Streamflow/discharge observations in the same format as the CSV files downloaded from the WSC
basinmaker/
-- Folder that contains the DEM and HRU data used for BasinMaker routing delineation
data/
    discharge_obs/
    -- Folder that contains the streamflow/discharge observation CSV files
    USGS_discharge_data.ipynb
    -- Jupyter notebook to convert the format of USGS data to WSC format
    forcing_csvs/
    -- Folder that will contain the lumped forcings for each subbasin, as well as the lumped forcings for the basin
    gridded/
    -- Folder that contains the gridded source datasets for calculating the static attributes
        dem/
        -- DEM data (HydroSHEDS DEM 3s, merged via `rio merge *.bil na_ca_dem_3s.tif`)
        landcover/
        -- Landcover data (NALCMS)
        soil/
        -- Soil data (GSDE netCDF files: BD, CLAY, GRAV, OC, SAND, SILT)
    rdrs_downloads/
    -- Folder that contains the gridded RDRS forcing data
    routing_networks/
    -- Folder that will contain the BasinMaker-created routing network files
model/
    time_series/
    -- Folder that will contain the netCDF files with forcings and discharge
    attributes/
    -- Folder that will contain the CSV files with static attributes
    basins/
    -- Folder that will contain the basin-IDs txt file train_basins.txt
    trained_model/
    -- Folder that contains the trained LSTM lumped model files
raven/
-- Folder that contains the Raven files
results/
-- Folder that will contain the simulation results
scripts/
-- Folder that contains the SR-LSTM code
    derive_grid_weights.py
    -- Script of the Grid-Weights-Generator
    ensemble2netcdf.py
    -- Script that takes the pickled ensemble results file and creates one netCDF submission file with predictions
- Create a virtual environment in Anaconda3 using the provided srlstm_environment.yml
- Download the RDRS v2.1 forcings from CaSPAr and place them in data/rdrs_downloads/
- Download the gridded static attribute data and place them in the corresponding folders under data/gridded/ and basinmaker/
- Place the streamflow observation CSVs in data/discharge_obs/; if a routing network is provided, rename its folder as *gaugeID_routing and place it in data/routing_networks/
- Place the validated NeuralHydrology LSTM model files in model/trained_model/
- Create a txt file named train_basins.txt containing the IDs of the basins used to train the LSTM model, and place it in model/basins/
- Edit the input parameters in scripts/run.py, and run the script to start the simulation
--watershed
-- the gauged basin ID
--experiment
-- defines the delineation scheme:
    - 'default' -- default mode: the threshold for the minimum drainage area of subbasins will be 10% of the total basin area, and lakes smaller than 5 km^2 will be removed
    - 'lumped' -- lumped mode: there will be no discretization
    - 'allsublake' -- all-in mode: all subbasins and lakes in the routing product will be preserved
    - 'MDAx_LAy' -- change x and y to define the thresholds for subbasin area and lake area; e.g., MDA200_LA10 uses 200 km^2 as the threshold to merge subbasins and 10 km^2 to remove lakes
--gauge_lat
-- latitude of the basin gauge under the WGS 84 geospatial reference system (the system used by Google Maps)
--gauge_lon
-- longitude of the basin gauge under the WGS 84 geospatial reference system
--start_date, --end_date
-- the time range for the simulation, in the format yyyy-mm-dd
--save_forcing
-- optional; include it if you want to save the generated subbasin-level forcings
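The flags above can be wired to a small argument parser. The sketch below is an illustration, not the actual scripts/run.py; the flag names come from the list above, while the defaults and the helper `experiment_thresholds` (which unpacks an 'MDAx_LAy' string) are assumptions:

```python
import argparse
import re

def parse_args(argv=None):
    # Mirrors the flags described above; types and defaults are assumptions.
    p = argparse.ArgumentParser(description="SR-LSTM simulation (sketch)")
    p.add_argument("--watershed", required=True, help="gauged basin ID, e.g. 02KB001")
    p.add_argument("--experiment", default="default",
                   help="'default', 'lumped', 'allsublake', or 'MDAx_LAy'")
    p.add_argument("--gauge_lat", type=float, required=True, help="WGS 84 latitude")
    p.add_argument("--gauge_lon", type=float, required=True, help="WGS 84 longitude")
    p.add_argument("--start_date", required=True, help="yyyy-mm-dd")
    p.add_argument("--end_date", required=True, help="yyyy-mm-dd")
    p.add_argument("--save_forcing", action="store_true",
                   help="save the generated subbasin-level forcings")
    return p.parse_args(argv)

def experiment_thresholds(experiment):
    """Return (min_drainage_area_km2, lake_area_km2) for an 'MDAx_LAy' string,
    or None for the named schemes ('default', 'lumped', 'allsublake')."""
    m = re.fullmatch(r"MDA(\d+(?:\.\d+)?)_LA(\d+(?:\.\d+)?)", experiment)
    return (float(m.group(1)), float(m.group(2))) if m else None
```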
Example:
os.system(
    "python ./scripts/main.py"
    " --watershed 02KB001"
    " --experiment MDA200_LA5"
    " --gauge_lat 45.886111"
    " --gauge_lon -77.315278"
    " --start_date 1980-01-01"
    " --end_date 1981-12-31"
    " --save_forcing"
)
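Several gauges can be run in a batch by assembling the same command line programmatically. A sketch under the assumptions of the example above; the helper `build_command` and the gauge list are hypothetical:

```python
import os

def build_command(watershed, experiment, lat, lon, start, end, save_forcing=False):
    """Assemble the command line used in the example above."""
    cmd = (f"python ./scripts/main.py --watershed {watershed}"
           f" --experiment {experiment}"
           f" --gauge_lat {lat} --gauge_lon {lon}"
           f" --start_date {start} --end_date {end}")
    if save_forcing:
        cmd += " --save_forcing"
    return cmd

# Hypothetical batch: (gauge ID, latitude, longitude) tuples.
gauges = [("02KB001", 45.886111, -77.315278)]
for gauge_id, lat, lon in gauges:
    command = build_command(gauge_id, "MDA200_LA5", lat, lon,
                            "1980-01-01", "1981-12-31", save_forcing=True)
    # os.system(command)  # uncomment to actually launch the simulations
```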