peterson-tim-j / AWAPer
Builds netCDF files of the Bureau of Meteorology Australian Water Availability Project daily national climate grids and extracts catchment average data.
Hi Tim et al. We have been using AWAPer with success, but now the BOM link seems to be blocked:
library(AWAPer)
setwd('S:/PRJ-MuttamaCreek/AWAP')
makeNetCDF_file(ncdfFilename = 'AWAP_2000_2020.nc',
                ncdfSolarFilename = 'AWAP_2000_2020_solar.nc',
                updateFrom = as.Date('2000-01-01'),
                updateTo = as.Date('2020-12-31'))
Starting to build both netCDF files.
... Testing downloading of AWAP precip. grid
... Getting grid gemoetry from file.
Error in readLines(a <- file(des.file.name)) : cannot open the connection
In addition: Warning messages:
1: In utils::download.file(url, des.file.name, quiet = T, mode = "wb") :
cannot open URL 'http://www.bom.gov.au/web03/ncc/www/awap/rainfall/totals/daily/grid/0.05/history/nat/2000010120000101.grid.Z': HTTP status was '403 Forbidden'
2: In readLines(a <- file(des.file.name)) :
cannot open file 'S:/PRJ-MuttamaCreek/AWAP/precip.20000101.grid': No such file or directory
Contact BOM?
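One workaround sometimes tried for HTTP 403 errors is setting a browser-like user agent before the download; this is only an assumption here, since the block may instead be server-side, in which case contacting BOM is the right path. The failing URL from the log can be rebuilt for a single day to sanity-check the setting:

```r
# Assumption: the 403 is triggered by R's default user-agent string.
# If the block is server-side, this will make no difference.
options(HTTPUserAgent = "Mozilla/5.0 (X11; Linux x86_64)")

# Rebuild the failing URL from the log for a single day:
datestring <- format(as.Date("2000-01-01"), "%Y%m%d")
url <- paste0("http://www.bom.gov.au/web03/ncc/www/awap/rainfall/totals/",
              "daily/grid/0.05/history/nat/", datestring, datestring, ".grid.Z")
```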
In download.ASCII.file.R, if the OS is not Windows, des.file.name is renamed (and subsequently deleted in makeNetCDF_file.R):
des.file.name = gsub('.Z', '.gz', des.file.name)
But if the OS is Windows, there is an additional manipulation using 7zip, and the resulting .grid.Z file is never deleted.
For consistency, the .Z file needs to be deleted in download.ASCII.file.R by inserting, after Line 22, the second line of code below:
exitMessage = system(paste0('7z e -aoa -bso0 "',des.file.name, '"'),intern = T)
file.remove(des.file.name)
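As an aside, the rename on the non-Windows path can also be made more robust with an anchored pattern; a standalone sketch using a sample file name:

```r
# An anchored pattern ('\\.Z$') is safer than '.Z', where the
# unescaped '.' matches any character, not just a literal dot.
des.file.name <- "precip.20000101.grid.Z"
des.file.name <- gsub('\\.Z$', '.gz', des.file.name)
```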
A user has requested the above option
Minor issue: on Mac, getDEM leaves a large (~1.7 GB) file behind, named "GA_9second_v3\ESRI\dem-9s.asc".
Mac OSX 10.14.6
Rstudio 1.2.5001
R version 3.6.1
AWAPer 3 June 2019
First of all, nice package.
I've encountered an issue with the solar radiation netCDF file. I used makeNetCDF_file to download data from 1990 to 2018, and it seems that from 2010-11-01 onwards no solar radiation data is saved. I looked at the downloaded grid files and they don't seem to have a problem.
Here is a plot for mean solar radiation for Australia from 1990-2018 showing when the issue occurs:
It seems that dates from 2010-11-01 have fill values.
Here is a reproducible example:
library(AWAPer)
library(lubridate)
library(ncdf4)
setwd('~/Data/AWAPer/')
# Make a new netCDF file
makeNetCDF_file(updateFrom = as.Date('2010-10-27'),
                updateTo = as.Date('2010-11-03'),
                keepFiles = TRUE)
# Open solar file
nc = nc_open('AWAP_solar.nc')
solar = nc$var$solarrad
varsize = solar$varsize
ndims = solar$ndims
nt = varsize[ndims]
time = ncvar_get(nc, 'time')
origin = as_date('1990-01-01')
delta = as.integer(as_date('2010-11-01') - origin)
indx = delta + 1
print(origin + time[indx])
start = c(1, 1, indx)
count = varsize
count[ndims] = 1
slice = ncvar_get(nc, varid = 'solarrad', start=start, count=count)
image(slice)
nc_close(nc)
# Look at downloaded grid file
library(raster)
R.utils::gunzip('solarrad20101101.grid.gz')
r = raster('solarrad20101101.grid')
plot(r)
# No problems
# precip/tmax etc is unaffected
nc = nc_open('AWAP.nc')
precip = nc$var$precip
varsize = precip$varsize
ndims = precip$ndims
nt = varsize[ndims]
time = ncvar_get(nc, 'time')
origin = as_date('1900-01-01')
delta = as.integer(as_date('2010-11-01') - origin)
indx = delta + 1
print(origin + time[indx])
start = c(1, 1, indx)
count = varsize
count[ndims] = 1
slice = ncvar_get(nc, varid = 'precip', start=start, count=count)
image(slice)
nc_close(nc)
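The index arithmetic used twice above can be factored into a small helper (a hypothetical function, not part of AWAPer); note the two files use different origins, 1990-01-01 for solar and 1900-01-01 for the other variables:

```r
# Convert a calendar date to a 1-based time index, given the file's origin date.
dateToIndex <- function(target, origin) {
  as.integer(as.Date(target) - as.Date(origin)) + 1L
}
dateToIndex("2010-11-01", "1990-01-01")  # 7610 (index into the solar file)
```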
Instead of using the working directory, the folder should be supplied as an argument.
Thanks so much for this code. Using the netCDF format is incredibly efficient and I am slowly getting my head around how to work with it.
Listed below are a couple of quirks that I ran into when running makeNetCDF_file() in a Windows environment, using R version 3.4.4:
Windows has no equivalent of the Linux "znew" command. I got around this by replacing:
system(paste('znew -f ',destFile));
destFile = file.path(workingFolder,paste('precip.',filedate_str,'.grid.gz',sep=''))
raw<-textConnection(readLines(a<-gzfile(destFile)));
with
system(paste('7z e -aoa -bso0 ',destFile));
destFile = file.path(workingFolder,paste('precip.',filedate_str,'.grid',sep=''))
raw<-textConnection(readLines(a<-file(destFile)));
For some reason which I don't understand, it helps to then close both the "raw" and "a" connections. Note the 7zip flags: "e" extracts, "-aoa" overwrites existing files, and "-bso0" suppresses messages on successful runs.
For some reason, in download.url the flag mode = "wb" is needed. Also, when the URL doesn't exist, the code spits out an error and refuses to continue. My not-very-elegant solution is below. Note that I had to change the initialisation of the didFail flag, and also the destFile name from the original code, since I am reading in the ASCII grid rather than the .gz file as in the original code:
didFail_precip = 0
if (!is.na(urlPrecip)) {
  url = paste(urlPrecip, datestring, datestring, '.grid.Z', sep='')
  destFile_precip = file.path(workingFolder, paste('precip.', datestring, '.grid.Z', sep=''))
  didFail_precip = tryCatch({download.file(url, destFile_precip, quiet = T, mode = "wb")},
                            error = function(cond) {return(TRUE)})
  if (didFail_precip == 0) {
    system(paste('7z e -aoa -bso0 ', destFile_precip))
    destFile_precip = gsub('.Z', '', destFile_precip)
  }
}
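The tryCatch pattern above, which returns download.file()'s status on success or TRUE on error, can be illustrated in isolation:

```r
# A failing expression yields the handler's value (TRUE); a succeeding
# expression yields its own value (download.file() returns 0 on success).
didFail <- tryCatch({ stop("simulated missing URL") }, error = function(cond) TRUE)
ok      <- tryCatch({ 0 },                             error = function(cond) TRUE)
```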
When I calculated the PET using Morton CRWE or Morton CARE, the monthly sum of the daily ET values (xxx$ET_mm) was slightly off compared to the monthly estimates.
xinyangf1 found that the extraction of ET can crash when the upper date is close to 1990, i.e. the start of the solar radiation record.
Dear Tim,
AWAPer stopped working when there was a missing data point in the input data frame used for the calculation of ET. I fixed that by changing line 576 in "extractCatchmentData" from:
missing_method = NULL, abnormal_method = 'DoY average', message = "no")
to:
missing_method = 'DoY average', abnormal_method = 'DoY average', message = "no")
Without the fix, the error is:
Error in if (sum(ind) == 1) { : missing value where TRUE/FALSE needed
When extracting at a set of points (instead of catchment polygons) the data for the first set of coordinates is returned for each subsequent set of coordinates.
I am not sure where this error originates, but my guess is line 263:
catchment.lookup = cbind(rep(1,length(catchments)),rep(1,length(catchments)));
The indices returned should, I think, increment by 1 with increasing row number, but currently they are all 1. I think all the data is extracted from the netCDF fine, but then at line 412:
ind = catchment.lookup[i,1]:catchment.lookup[i,2]
The data returned is always for the first set of coordinates.
I haven't tested this, so these are just my thoughts at the moment.
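If that diagnosis is right, a hedged sketch of what the lookup would need to contain for point extraction (assuming one grid cell per point, so start and end indices are equal and increment with the row):

```r
# Each row of the lookup should point at its own index, not at 1:1.
n <- 5  # e.g. five sets of coordinates
catchment.lookup <- cbind(seq_len(n), seq_len(n))
catchment.lookup[3, 1]:catchment.lookup[3, 2]  # row 3 selects index 3, not 1
```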
In getDEM, the variable DEMfilename.2unzip is "GA_9second_v3\ESRI\dem-9s.asc", but unzipping at Line 59 creates dem-9s.asc in the working directory. Hence the following line, executed at Line 62, should be changed from:
DEM <- sp::read.asciigrid(DEMfilename.2unzip,colname='DEM', proj4string = sp::CRS("+proj=longlat +ellps=GRS80"))
to
DEM <- sp::read.asciigrid(basename(DEMfilename.2unzip),colname='DEM', proj4string = sp::CRS("+proj=longlat +ellps=GRS80"))
in order to resolve the current error:
In file(fname, "r") :
cannot open file 'GA_9second_v3\ESRI\dem-9s.asc': No such file or directory
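basename() keeps only the file-name component, which is why the fix works once the archive is unzipped into the working directory (illustrated with a forward-slash path; note that on non-Windows platforms basename() does not treat a backslash as a path separator):

```r
# basename() strips the directory components, leaving only the file name.
f <- "GA_9second_v3/ESRI/dem-9s.asc"
basename(f)  # "dem-9s.asc"
```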
Hello Tim,
At line 905 in "extractCatchmentData" the code is:
catchmentAvgTmp = data.frame(CatchmentID=rep(catchments$CatchID[i],nGridCells))
This means the user must specify CatchID as a field in their shapefile. Either this needs to be documented in the user manual, or perhaps the following can be used instead:
catchmentAvgTmp = data.frame(CatchmentID=rep(catchments@data[i,1],nGridCells))
I wanted to check with you first before making any changes as I am still getting comfortable with shapefiles in R.
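A hedged middle ground, not tested against the package, is to use CatchID when present and fall back to the first attribute column otherwise (illustrated on a plain data frame standing in for catchments@data; pickIDs is a hypothetical helper name):

```r
# Hypothetical fallback: prefer the CatchID field, else use the first column.
pickIDs <- function(attrs) {
  if ("CatchID" %in% names(attrs)) attrs$CatchID else attrs[, 1]
}
pickIDs(data.frame(CatchID = c(101, 102)))                         # uses CatchID
pickIDs(data.frame(Name = c("A", "B"), stringsAsFactors = FALSE))  # first column
```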
Cannot load multiple packages using the command (line 2) in Example 6:
library(c('Evapotranspiration', 'ncdf4', 'R.utils', 'raster', 'chron', 'maptools', 'sp', 'zoo', 'methods', 'xts'))
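library() loads one package per call, so Example 6 needs a loop; a sketch using lapply (shown with base packages as stand-ins for the ten listed above):

```r
# library() takes a single package; loop over the names instead.
pkgs <- c('stats', 'utils', 'methods')  # stand-ins for Example 6's package list
invisible(lapply(pkgs, library, character.only = TRUE))
```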
When the extraction command has getSolarrad=F but getET=T, the following error occurs. A check requiring getSolarrad=T whenever getET=T is needed.
climateData = extractCatchmentData(ncdfFilename = '/home/tim/Documents/Research/DataSets/AWAPer/AWAP.nc',
                                   ncdfSolarFilename = '/home/tim/Documents/Research/DataSets/AWAPer/AWAP_solar.nc',
                                   extractFrom = '1950-1-1', extractTo = as.Date(Sys.Date()-60, "%Y-%m-%d"),
                                   getTmin = T, getTmax = T, getVprp = T, getSolarrad = F, getET = T,
                                   catchments = coordinates.data,
                                   DEM = DEM_9s, ET.constants = constants)
WARNING: The projection string of the catchment boundaries does not appear to be +proj=longlat +ellps=GRS80. Attempting to transform coordinates...
Extraction data summary:
NetCDF non-solar radiation climate data exists from 1900-01-01 to 2020-10-23
Data will be extracted from 1950-01-01 to 2020-08-25 at 6 catchments
Starting data extraction:
... Building catchment weights:
... Starting to extract data across all catchments:
... Extracting data for time point 1825 of 25805
... Extracting data for time point 3650 of 25805
... Extracting data for time point 5475 of 25805
... Extracting data for time point 7300 of 25805
... Extracting data for time point 9125 of 25805
... Extracting data for time point 10950 of 25805
... Extracting data for time point 12775 of 25805
... Extracting data for time point 14600 of 25805
... Extracting data for time point 16425 of 25805
... Extracting data for time point 18250 of 25805
... Extracting data for time point 20075 of 25805
... Extracting data for time point 21900 of 25805
... Extracting data for time point 23725 of 25805
... Extracting data for time point 25550 of 25805
... Calculating catchment weighted daily data.
... Calculating PET for grid cell 1 of 1
Error in extractCatchmentData(ncdfFilename = "/home/tim/Documents/Research/DataSets/AWAPer/AWAP.nc", :
object 'DEMpoints' not found
In addition: There were 50 or more warnings (use warnings() to see the first 50)
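A hedged sketch of the check that could be added near the top of extractCatchmentData (hypothetical helper name; argument names as in the call above):

```r
# Hypothetical input check: ET is computed from the solar radiation grids,
# so getET=T must imply getSolarrad=T.
checkETArgs <- function(getSolarrad, getET) {
  if (getET && !getSolarrad)
    stop("getET=TRUE requires getSolarrad=TRUE.")
  invisible(TRUE)
}
checkETArgs(getSolarrad = TRUE, getET = TRUE)  # passes silently
```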