unidata / thredds
THREDDS Data Server v4.6
Home Page: https://www.unidata.ucar.edu/software/tds/v4.6/index.html
The processing time of the FeatureDataset calcBounds function on aggregations seems to have increased dramatically from version 4.3 of netCDF-Java to version 4.5. When running the following test program, the calcBounds operation took 1401 ms with netcdf-4.3.23.jar and 7008 ms with netcdf4-4.5.4.jar:
NetcdfDataset dataset = NcMLReader.readNcML(testFileName, null);
FeatureType type = FeatureDatasetFactoryManager.findFeatureType(dataset);
FeatureDataset featureDataset = findFeatureDataset(type, dataset);
featureDataset.calcBounds();
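Timings like those quoted above can be collected with a simple wrapper (plain Java; timeMillis is a hypothetical helper, and the Runnable stands in for the featureDataset.calcBounds() call):

```java
// Minimal timing wrapper for comparing calcBounds() across library versions.
// timeMillis is a hypothetical helper; the Runnable stands in for the
// featureDataset.calcBounds() call from the snippet above.
public class CalcBoundsTimer {
    static long timeMillis(Runnable work) {
        long start = System.nanoTime();
        work.run();
        return (System.nanoTime() - start) / 1_000_000; // elapsed wall time in ms
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(() -> {
            try { Thread.sleep(50); } catch (InterruptedException e) { }
        });
        System.out.println("calcBounds stand-in took " + elapsed + " ms");
    }
}
```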
ncWMS is failing with an FMRC featureCollection on TDS version 4.5.4-SNAPSHOT - 20141008.1804, giving this error:
2014-11-13T15:30:15.753 +0100 [ 280992][ 27] ERROR - thredds.server.wms.ThreddsWmsController - dispatchWmsRequest(): Exception:
java.lang.IllegalArgumentException: The calendar system proleptic_gregorian cannot be handled
at uk.ac.rdg.resc.edal.cdm.CdmUtils.getTimesteps(CdmUtils.java:409) ~[CdmUtils.class:1.0.tds.4.4.0]
Is this the same problem described here:
http://sourceforge.net/p/ncwms/mailman/message/32618474/
I'm trying to specify more than one convention in an FMRC.
Inside the <featureCollection> tag in the catalog NcML, I've got:
<attribute name="Conventions" value="CF-1.4, SGRID-0.1"/>
but when I look at the FMRC:
http://geoport.whoi.edu/thredds/dodsC/coawst_4/use/fmrc/coawst_4_use_best.ncd.html
the result is: Conventions: CF-1.4, _Coordinates.
This is the case on TDS 4.3.21 and on 4.5.4.
I also notice that on aggregations on motherlode:
http://thredds.ucar.edu/thredds/dodsC/satellite/3.9/WEST-CONUS_4km.html
we have Conventions: _Coordinates.
It would appear that FMRC is selecting only the 1st specified convention and then adding _Coordinates.
How can I modify my NcML so that the resulting conventions are Conventions: CF-1.4, SGRID-0.1?
Times are showing up incorrectly in the toolsUI's PointFeature viewer for orthogonal timeSeriesProfiles. For example:
The profileName, obsDate, and nomDate all show the time of the first profile in the dataset. The incomplete/ragged variants of timeSeriesProfile seem to work fine.
As discussed in the pycsw issue here: geopython/pycsw#269, catalog services that rely on OWSLib, like pycsw, expect to have gmd:protocol specified in order to populate the scheme in the references that get returned from a CSW request. Without this, SOS, WMS, OPeNDAP endpoints, etc. always come back with a scheme of None.
Should just need to modify the UnidataDD2MI.xsl file:
https://github.com/Unidata/thredds/blob/f902e5a3573583a9b87f1d3c7d2ff27a121289fe/tds/src/main/webapp/WEB-INF/classes/resources/xsl/nciso/UnidataDD2MI.xsl
@pacioos, I know you've addressed this -- can you please submit a PR to fix this?
I raised the same issue at ethanrd/threddsIso#2 because I wasn't sure what the development path is on ncISO, and maybe we need to fix in both places?
The version of TDS 4.6 released on Mar 26, 2014 still has this old UnidataDD2MI.xsl file from 2012:
https://github.com/Unidata/thredds/blob/4.6.0/tds/src/main/webapp/WEB-INF/classes/resources/xsl/nciso/UnidataDD2MI.xsl
This needs to be replaced with this one:
https://github.com/Unidata/threddsIso/blob/master/src/main/resources/xsl/nciso/UnidataDD2MI.xsl
$ git describe
v4.3.14
$ java -jar ui/target/toolsUI-4.3.14.jar
Exception in thread "main" java.lang.SecurityException: no manifiest section for signature file entry org/bouncycastle/cms/CMSSignedDataStreamGenerator$TeeOutputStream.class
at sun.security.util.SignatureFileVerifier.verifySection(SignatureFileVerifier.java:380)
at sun.security.util.SignatureFileVerifier.processImpl(SignatureFileVerifier.java:231)
at sun.security.util.SignatureFileVerifier.process(SignatureFileVerifier.java:176)
at java.util.jar.JarVerifier.processEntry(JarVerifier.java:288)
at java.util.jar.JarVerifier.update(JarVerifier.java:199)
at java.util.jar.JarFile.initializeVerifier(JarFile.java:323)
at java.util.jar.JarFile.getInputStream(JarFile.java:388)
at sun.misc.URLClassPath$JarLoader$2.getInputStream(URLClassPath.java:692)
at sun.misc.Resource.cachedInputStream(Resource.java:61)
at sun.misc.Resource.getByteBuffer(Resource.java:144)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:256)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
The class TestTDSdataset.java in ui expects to get some utility functionality from thredds.crawl.Util, but this package does not exist. How can this be fixed?
Hi @JohnLCaron,
In version 4.6.0, GeoGrid.makeSubset throws an exception for String coordinates, saying that numeric values are required.
In version 4.3.22 this was working.
Regards,
Antonio
P.S.: The problem appears to be in the CoordinateAxis1D constructor:
https://github.com/Unidata/thredds/blob/4.6.0/cdm/src/main/java/ucar/nc2/dataset/CoordinateAxis1D.java#L115-L116
and section method:
https://github.com/Unidata/thredds/blob/4.6.0/cdm/src/main/java/ucar/nc2/dataset/CoordinateAxis1D.java#L158-L189
If the context path is changed, the TDS complains about a non-existent directory.
My workaround was to create a directory with the same name as the context path inside the content path.
Excerpt from my log file apache-tomcat-7.0.34/logs/localhost.2015-03-03.log
SEVERE: Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ConfigCatalogManager': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private thredds.server.config.TdsContext thredds.core.ConfigCatalogManager.tdsContext; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'tdsContext' defined in file [/vols/oceano/services/TDS6/apache-tomcat-7.0.34/webapps/tds6/WEB-INF/classes/thredds/server/config/TdsContext.class]: Invocation of init method failed; nested exception is java.lang.IllegalStateException: Content directory not a directory
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:298)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1148)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:458)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:293)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:223)
...skipping...
it method failed; nested exception is java.lang.IllegalStateException: Content directory not a directory
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1514)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:521)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:458)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:293)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:223)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:290)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:191)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.findAutowireCandidates(DefaultListableBeanFactory.java:921)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:864)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:779)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:498)
... 29 more
Caused by: java.lang.IllegalStateException: Content directory not a directory
at thredds.server.config.TdsContext.afterPropertiesSet(TdsContext.java:335)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1573)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1511)
... 39 more
It looks like global attributes that reach a certain length are not processed correctly by toolsUI-4.3.jar.
This file, for example, has a quality_control_log global attribute which is not shown, although a simple
ncdump -h IMOS_ANMN-NSW_AETVZ_20131127T230000Z_PH100_FV01_PH100-1311-Workhorse-ADCP-109.5_END-20140306T010000Z_C-20140521T053527Z.nc | grep "quality_control_log"
can show it.
ncdump -h http://thredds.aodn.org.au/thredds/dodsC/IMOS/ANMN/NSW/PH100/Velocity/IMOS_ANMN-NSW_AETVZ_20131127T230000Z_PH100_FV01_PH100-1311-Workhorse-ADCP-109.5_END-20140306T010000Z_C-20140521T053527Z.nc | grep "quality_control_log"
doesn't show anything via OPeNDAP either, so I'm guessing the OPeNDAP service might be using toolsUI or similar code to render the global attributes.
@JohnLCaron and @cwardgar, I spoke with Dave Foster (@daf) today, and although he said he would have liked to fix a few more bugs first, he was fine with you guys incorporating ncSOS into this Unidata/thredds repo, and work out the bugs here. The question we had was how this should work.
If you clone ncSOS and add to the thredds repo, do we then post issues on github or do we need to use the Unidata Jira?
Or do you have other ideas about how to manage the shared work?
@daf, please jump in if I've munged this topic up.
Thanks,
Rich
Hi,
NCEP will soon change their GFS datasets (including a new field parameter), and because of this I am updating the versions of the grib and netcdf libraries I'm using from here. Unfortunately, I have come across a problem with the QuasiRegular interpolation of datasets in a specific case.
90227d7#diff-3
Earlier this year, there was a change made to the QuasiRegular class wherein handling of edge cases was changed. The change caused the "eastern" edge to continue interpolation by wrapping around to the start of the "western" dataset row. Previously the "eastern" edge was (I think) more or less ignored. While this behaviour makes sense for "global" datasets that are 0 to 360 degrees, it does not make sense for datasets that are only a cut out section. The particular case where this has been a problem is GRIB Edition 1 WAFS files.
I am not entirely sure what the proper behaviour should be in this case, though a possibility might be to interpolate in reverse from the last point to the penultimate point. More likely might be to simply ignore it like previously happened. It should be possible to distinguish between a "global" dataset that wraps and a dataset that does not wrap around, and to pass an argument to the QuasiRegular algorithms.
In the latest branch there are further changes around this code with regard to implementing linear interpolation, and there appears to be the same problem in both linear and cubic interpolation in that version of the code.
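The wrap/no-wrap distinction can be illustrated with a minimal sketch (plain Java; this is NOT the library's QuasiRegular code, and linearExpand and the wrap flag are hypothetical names): with wrap the last segment interpolates from the last point back toward the first point of the row, which only makes sense for global 0-360 rows.

```java
// Sketch: expand a thinned (quasi-regular) row of m points to n points by
// linear interpolation. NOT the library's QuasiRegular implementation;
// linearExpand and the wrap flag are hypothetical.
public class QuasiRegularSketch {
    static double[] linearExpand(double[] row, int n, boolean wrap) {
        int m = row.length;
        double[] out = new double[n];
        for (int i = 0; i < n; i++) {
            // With wrap: treat the row as circular (global 0-360 row).
            // Without wrap: spread the m points over the n output points.
            double x = wrap ? i * (double) m / n : i * (double) (m - 1) / (n - 1);
            int j0 = (int) Math.floor(x);
            double frac = x - j0;
            int j1 = wrap ? (j0 + 1) % m : Math.min(j0 + 1, m - 1);
            out[i] = row[j0] * (1 - frac) + row[j1] * frac;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] row = {0.0, 10.0};
        // Cut-out (non-global) row: endpoints are kept, no wrap-around.
        System.out.println(java.util.Arrays.toString(linearExpand(row, 3, false)));
        // Global row: the last output point interpolates back toward row[0].
        System.out.println(java.util.Arrays.toString(linearExpand(row, 4, true)));
    }
}
```

For a cut-out grid like the WAFS files, the non-wrapping branch is the safe behaviour; a flag like this could be passed down to the interpolation routines.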
Thanks,
Kohl
There appears to be a problem with the interpretation of specific GRIB2 time delta headers in THREDDS. The problem occurs with the NCEP GEFS dataset (downloaded from http://nomads.ncep.noaa.gov/pub/data/nccf/com/gens/prod/gefs.20150326/00/pgrb2/). Up to time step +252, things go correctly. From +258 onward, the GRIB2 headers change, causing THREDDS to come up with wrong variable names for some parameters. For example, the maximum temperature shows up (via OPeNDAP) as Maximum_temperature_height_above_ground_36_Hour_Maximum instead of Maximum_temperature_height_above_ground_6_Hour_Maximum.
A grib_compare of the +252 and +258 GRIB2 files yields the following relevant differences (i.e., for the TMAX parameter):
long [indicatorOfUnitOfTimeRange]: [1] != [11]
long [forecastTime]: [246] != [42]
long [lengthOfTimeRange]: [6] != [1]
With indicatorOfUnitOfTimeRange 1 and lengthOfTimeRange 6, the time delta is correctly interpreted as 6 hours. With indicatorOfUnitOfTimeRange 11 and lengthOfTimeRange 1 the time delta is interpreted as 36 hours, which is incorrect. This also messes up the NcML Aggregation for the entire dataset.
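For reference, GRIB2 Code Table 4.4 defines the size of each time unit; a small helper (unitToHours is a hypothetical name, and the table values are transcribed from the published code table, not from THREDDS code) shows that both header combinations should yield the same 6-hour interval:

```java
// Sketch of GRIB2 Code Table 4.4 (indicator of unit of time range), in hours.
// unitToHours is a hypothetical helper, not THREDDS code.
public class Grib2TimeUnits {
    static double unitToHours(int indicatorOfUnitOfTimeRange) {
        switch (indicatorOfUnitOfTimeRange) {
            case 0:  return 1.0 / 60.0;   // minute
            case 1:  return 1.0;          // hour
            case 2:  return 24.0;         // day
            case 10: return 3.0;          // 3 hours
            case 11: return 6.0;          // 6 hours
            case 12: return 12.0;         // 12 hours
            case 13: return 1.0 / 3600.0; // second
            default: throw new IllegalArgumentException("unit " + indicatorOfUnitOfTimeRange);
        }
    }

    public static void main(String[] args) {
        // +252 file: unit=1 (hour), lengthOfTimeRange=6 -> 6 hours
        System.out.println(unitToHours(1) * 6);
        // +258 file: unit=11 (6 hours), lengthOfTimeRange=1 -> also 6 hours
        System.out.println(unitToHours(11) * 1);
    }
}
```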
Is there a way to work around this?
Server version:
opendap/3.7
THREDDS Data Server Version: 4.3.16
Build Date: 20130319.1353
Dear all,
I'm using TDS version 4.3.22 and I have configured the NCSS servlet to serve datasets. The servlet is working, but when I try to access restricted datasets the servlet reports a 404 HTTP error.
I think it's an issue with the restricted-access configuration, because if I access the same dataset using the OPeNDAP service, the TDS challenges me with Basic authentication, and after logging in I can access the restricted dataset (partially) using the NCSS. I say partially because only the web form is shown: the CSS and image files are not loaded and the requests are not working.
Regards
Antonio
A very simple joinExisting aggregation that was working with 4.2.6 is not working with 4.2.9. Could it be related to scanning recursively for directories when using a scan element? I've tested on both Tomcat 6 and 7 using both JDK 6 and 7; the problem is never present in 4.2.6 but always present in 4.2.9.
I've restarted the Tomcat servers and cleared the cache directories multiple times.
Working:
Version 4.2.20110404.1849
Build Date = 2011-04-04 18:49:47
Build Name = 20110404.1849
Not Working:
Version 4.2.9
Build Date = 2011-11-08 17:58:27
Build Name = 9
Stack trace, catalog file, and directory listing are here: https://gist.github.com/1370929
Live TDS is here: http://tds.maracoos.org/thredds/MODIS.html
https://github.com/Unidata/thredds/blob/4.5.5/cdm/src/main/java/ucar/nc2/util/Misc.java#L300
Misc.getProtocols(String url) assumes that any : in a file name is a protocol delimiter. *nix OSes allow : in file names, so this needs to be fixed to eliminate false positives.
Currently this leads to some confusing exceptions where a .dds suffix is added to the filename:
Exception in thread "main" ucar.httpservices.HTTPException: Malformed URL: /blah/blah/some_file_2014-04-13_16:00:00.nc.dds
at ucar.httpservices.HTTPMethod.<init>(HTTPMethod.java:198)
at ucar.httpservices.HTTPMethod.<init>(HTTPMethod.java:187)
at ucar.httpservices.HTTPFactory.Get(HTTPFactory.java:124)
at ucar.nc2.dataset.NetcdfDataset.checkIfDods(NetcdfDataset.java:857)
at ucar.nc2.dataset.NetcdfDataset.disambiguateHttp(NetcdfDataset.java:817)
at ucar.nc2.dataset.NetcdfDataset.openOrAcquireFile(NetcdfDataset.java:707)
at ucar.nc2.dataset.NetcdfDataset.acquireFile(NetcdfDataset.java:638)
at ucar.nc2.ncml.Aggregation$Dataset.acquireFile(Aggregation.java:683)
at ucar.nc2.ncml.AggregationExisting.buildNetcdfDataset(AggregationExisting.java:74)
at ucar.nc2.ncml.Aggregation.finish(Aggregation.java:429)
at ucar.nc2.ncml.NcMLReader.readNetcdf(NcMLReader.java:502)
at ucar.nc2.ncml.NcMLReader._readNcML(NcMLReader.java:457)
at ucar.nc2.ncml.NcMLReader.readNcML(NcMLReader.java:257)
at ucar.nc2.ncml.NcMLReader.readNcML(NcMLReader.java:207)
at ucar.nc2.dataset.NetcdfDataset.acquireNcml(NetcdfDataset.java:1085)
at ucar.nc2.dataset.NetcdfDataset.openOrAcquireFile(NetcdfDataset.java:724)
at ucar.nc2.dataset.NetcdfDataset.openDataset(NetcdfDataset.java:428)
at ucar.nc2.dataset.NetcdfDataset.openDataset(NetcdfDataset.java:411)
at ucar.nc2.dataset.NetcdfDataset.openDataset(NetcdfDataset.java:396)
at ucar.nc2.dataset.NetcdfDataset.openDataset(NetcdfDataset.java:383)
Ticket AYO-259890 may also be related.
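A stricter check (a sketch with a hypothetical hasScheme helper, not the actual Misc.getProtocols fix) would only treat the prefix before the first : as a protocol when it matches the RFC 3986 scheme grammar and the colon appears before any path separator:

```java
import java.util.regex.Pattern;

// Sketch of a stricter protocol test than "contains ':'".
// hasScheme is a hypothetical helper, not the actual Misc.getProtocols fix.
public class SchemeCheck {
    // RFC 3986: scheme = ALPHA *( ALPHA / DIGIT / "+" / "-" / "." )
    private static final Pattern SCHEME = Pattern.compile("^[A-Za-z][A-Za-z0-9+.-]*$");

    static boolean hasScheme(String location) {
        int colon = location.indexOf(':');
        int slash = location.indexOf('/');
        // No colon, or the colon comes after a path separator: not a URL scheme.
        if (colon <= 0 || (slash >= 0 && slash < colon)) return false;
        return SCHEME.matcher(location.substring(0, colon)).matches();
    }

    public static void main(String[] args) {
        System.out.println(hasScheme("http://example.org/file.nc"));                  // true
        System.out.println(hasScheme("/blah/blah/some_file_2014-04-13_16:00:00.nc")); // false
        System.out.println(hasScheme("file:/data/some.nc"));                          // true
    }
}
```

Under this rule, a colon inside a timestamped file name no longer looks like a protocol delimiter.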
In grib/src/main/java/ucar/grib/grib1/Grib1Data.java, gdsv does not get set if the GRIB file does not contain a Grid Definition Section, so a NullPointerException is thrown further down (on line 136):
new Grib1BinaryDataSection(raf, decimalScale, bms, gdsv.getScanMode(), gdsv.getNx(), gdsv.getNy());
This could potentially be solved by using the alternate Grib1BinaryDataSection constructor when gdsv is null.
I was surprised to discover that aggregations by default use a random dataset from the aggregation as the typical dataset (source of attribute values, etc).
https://github.com/Unidata/thredds/blob/4.5.5/cdm/src/main/java/ucar/nc2/ncml/Aggregation.java#L531
TypicalDataset.LATEST seems like a much more intuitive default to me. Would it make sense to change this?
@lesserwhirls , @dopplershift (not sure who is doing the ncWMS work), I just discovered that animations in Google Earth are no longer working. Just a single frame appears, with no time slider.
This TDS 4.5.4 kmz request DOES NOT display property in Google Earth (no time slider, no multiple frames):
http://geoport-dev.whoi.edu/thredds/wms/coawst_4/use/fmrc/coawst_4_use_best.ncd?LAYERS=temp&ELEVATION=-0.03125&TIME=2015-02-11T00:00:00.000Z,2015-02-12T00:00:00.000Z&TRANSPARENT=true&STYLES=boxfill%2Frainbow&CRS=EPSG%3A4326&COLORSCALERANGE=0%2C30&NUMCOLORBANDS=64&LOGSCALE=false&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&EXCEPTIONS=application%2Fvnd.ogc.se_inimage&FORMAT=application/vnd.google-earth.kmz&SRS=EPSG%3A4326&BBOX=-72.199045333399,40.121831318462,-63.954587564625,46.562813950316&WIDTH=512&HEIGHT=400
while in TDS 4.3, the same kmz request DOES display properly in Google Earth:
http://geoport.whoi.edu/thredds/wms/coawst_4/use/fmrc/coawst_4_use_best.ncd?LAYERS=temp&ELEVATION=-0.03125&TIME=2015-02-11T00:00:00.000Z,2015-02-12T00:00:00.000Z&TRANSPARENT=true&STYLES=boxfill%2Frainbow&CRS=EPSG%3A4326&COLORSCALERANGE=0%2C30&NUMCOLORBANDS=64&LOGSCALE=false&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&EXCEPTIONS=application%2Fvnd.ogc.se_inimage&FORMAT=application/vnd.google-earth.kmz&SRS=EPSG%3A4326&BBOX=-72.199045333399,40.121831318462,-63.954587564625,46.562813950316&WIDTH=512&HEIGHT=400
This also seems to be a problem on your thredds 4.5.4 server, so I conclude it's not some local configuration problem on my end. No time slider here with this kmz either:
http://thredds.ucar.edu/thredds/wms/grib/NCEP/WW3/Global/Best?LAYERS=Significant_height_of_combined_wind_waves_and_swell_surface&ELEVATION=0&TIME=2015-02-13T00:00:00.000Z,2015-02-14T00:00:00.000Z&TRANSPARENT=true&STYLES=boxfill%2Frainbow&CRS=EPSG%3A4326&COLORSCALERANGE=0.1%2C5.34&NUMCOLORBANDS=20&LOGSCALE=false&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&EXCEPTIONS=application%2Fvnd.ogc.se_inimage&FORMAT=application/vnd.google-earth.kmz&SRS=EPSG%3A4326&BBOX=-79.1578125,35.78125,-59.3578125,51.25&WIDTH=512&HEIGHT=400
When trying to open an erroneous HDF file where a numerical attribute is given as the number plus a trailing space, a NumberFormatException is thrown. See below for an excerpt of a header, and note the trailing space in line 10.
The API could easily be more tolerant here by adding a trim() in line 258 of ucar.nc2.iosp.hdf4.HdfEos:
String sizeS = elem.getChild("Size").getText();
would become
String sizeS = elem.getChild("Size").getText().trim();
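The failure mode and the fix can be reproduced in plain Java (a standalone illustration with a made-up attribute value, not the HdfEos code itself):

```java
// Standalone illustration of the trailing-space failure and the trim() fix;
// not the actual HdfEos code. The value "245 " is made up for illustration.
public class TrailingSpaceParse {
    public static void main(String[] args) {
        String sizeS = "245 "; // numeric value with a trailing space
        try {
            Integer.parseInt(sizeS); // parseInt rejects trailing whitespace
            System.out.println("parsed without trim");
        } catch (NumberFormatException e) {
            System.out.println("NumberFormatException without trim");
        }
        System.out.println(Integer.parseInt(sizeS.trim())); // prints 245
    }
}
```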
The getBitmap method in nc2.grib.grib1.Grib1SectionBitmap does not have an access modifier, making it default to package-private.
Based on the grib2 version, it should be made public.
At least in the TDS 4.3.22 and 4.5.4 war files, any reference to NCSS configuration entries in WEB-INF/web.xml is missing, and the ncss-servlet.xml file is missing as well.
This configuration is also not in the GitHub repositories for the previous branches or for 4.6.0.
Antonio
We are currently having an issue with the embedded ncWMS server in our thredds instance where date/time values reported in the GetCapabilities statement and in Godiva do not reflect the closest date/time to the value recorded in the netcdf dataset. I believe this is an issue with the netcdf java library, as explained below, rather than with ncWMS itself. This is also an issue in standalone ncWMS, and it is causing us issues downstream where we are using the date/time reported by ncWMS for further work (it's not just a cosmetic issue).
For example, for the netcdf file
http://thredds-6-nsp-mel.aodn.org.au/thredds/fileServer/IMOS/ACORN/gridded_1h-avg-current-map_QC/ROT/2014/03/13/IMOS_ACORN_V_20140313T023000Z_ROT_FV01_1-hour-avg.nc
the time value is 23447.104166666664 (days since 1950-01-01 00:00:00 UTC). If I use matlab to calculate the date/time value by adding this duration to the base date, matlab gives me 14-03-13 02:30 which is what we are expecting:
>> datestr(23447.104166666664 + datenum('01-01-1950 00:00:00'), 'dd-mmm-yyyy HH:MM:SS.FFF')
ans =
13-Mar-2014 02:30:00.000
nctools also reports the date/time as 2014-03-13 02:30:
ggalibert@5-nsp-mel:/mnt/opendap/1/IMOS/opendap/ACORN/gridded_1h-avg-current-map_QC/ROT/2014/03/13$ ncdump -t -v TIME IMOS_ACORN_V_20140313T023000Z_ROT_FV01_1-hour-avg.nc | grep 'TIME = "'
TIME = "2014-03-13 02:30" ;
But the embedded ncWMS reports the date/time as 2014-03-13 02:29:59.999:
Debugging this in ncWMS I found that ncWMS uses the ucar.nc2.time.CalendarDate add method to calculate this date/time.
The reason why ucar.nc2.time.CalendarDate add returns 2014-03-13 02:29:59.999 is because it casts the time in milliseconds, calculated as a double, to a long, truncating it in the process:
case Day:
return new CalendarDate(cal, dateTime.plus( (long) (value * 86400 * 1000) ));
It would make more sense to round it to the nearest millisecond, as this gives us the closest date/time to the stored double value, as follows:
case Day:
return new CalendarDate(cal, dateTime.plus( Math.round(value * 86400 * 1000) ));
(same applies to other calculations in this method).
Any chance of getting this fixed? I can prepare a pull request if you would like.
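The cast-vs-round difference can be demonstrated with plain Java and java.time, using the TIME value quoted above (standalone arithmetic, not the ucar.nc2.time.CalendarDate code):

```java
import java.time.Instant;

// Standalone demonstration of the cast-vs-round difference for the
// TIME value quoted above; not the ucar.nc2.time.CalendarDate code.
public class MillisRounding {
    public static void main(String[] args) {
        double days = 23447.104166666664; // days since 1950-01-01 00:00:00 UTC
        long truncated = (long) (days * 86400 * 1000); // cast drops the fraction
        long rounded = Math.round(days * 86400 * 1000); // nearest millisecond
        Instant base = Instant.parse("1950-01-01T00:00:00Z");
        System.out.println(base.plusMillis(truncated)); // one millisecond short
        System.out.println(base.plusMillis(rounded));   // 2014-03-13T02:30:00Z
    }
}
```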
The doc about JoinExisting aggregations states that a coordinate must exist. One way to do this is to define a new coordinate like this:
<netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2">
<variable name="time" shape="time" type="int">
<attribute name="units" value="days since 2000-01-01"/>
<attribute name="_CoordinateAxisType" value="Time" />
<values start="0" increment="1" />
</variable>
<aggregation dimName="time" type="joinExisting">
<netcdf location="file:/test/temperature/jan.nc" ncoords="31"/>
<netcdf location="file:/test/temperature/feb.nc" ncoords="28"/>
</aggregation>
</netcdf>
but this is not working. The coordinate definition must be inside the aggregation element:
<netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2">
<aggregation dimName="time" type="joinExisting">
 <variable name="time" shape="time" type="int">
<attribute name="units" value="days since 2000-01-01"/>
<attribute name="_CoordinateAxisType" value="Time" />
 <values start="0" increment="1" />
</variable>
<netcdf location="file:/test/temperature/jan.nc" ncoords="31"/>
<netcdf location="file:/test/temperature/feb.nc" ncoords="28"/>
</aggregation>
</netcdf>
It would be desirable to be able to configure THREDDS to use the long_name as a variable title rather than the standard_name.
I'm also getting Plumbr reports of "long locked threads".
Are these useful or useless?
On 5 occasions threads were locked due to the same underlying problem. Total time for those locks was 40s:
The thread was waiting in synchronized block in uk.ac.rdg.resc.edal.cdm.LookUpTableGrid.generate() method line 88 for the java.util.HashMap lock to be released.
Full call stack for the waiting thread:
uk.ac.rdg.resc.edal.cdm.LookUpTableGrid.generate():88
uk.ac.rdg.resc.edal.cdm.CdmUtils.createHorizontalGrid():278
uk.ac.rdg.resc.edal.cdm.CdmUtils.readCoverageMetadata():173
uk.ac.rdg.resc.edal.cdm.CdmUtils.readCoverageMetadata():145
thredds.server.wms.ThreddsDataset.<init>():136
thredds.server.wms.ThreddsDataset.getThreddsDatasetForRequest():280
thredds.server.wms.ThreddsWmsController.dispatchWmsRequest():165
uk.ac.rdg.resc.ncwms.controller.AbstractWmsController.handleRequestInternal():200
org.springframework.web.servlet.mvc.AbstractController.handleRequest():153
org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle():48
org.springframework.web.servlet.DispatcherServlet.doDispatch():919
org.springframework.web.servlet.DispatcherServlet.doService():851
org.springframework.web.servlet.FrameworkServlet.processRequest():953
org.springframework.web.servlet.FrameworkServlet.doGet():844
javax.servlet.http.HttpServlet.service():621
org.springframework.web.servlet.FrameworkServlet.service():829
javax.servlet.http.HttpServlet.service():722
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter():305
org.apache.catalina.core.ApplicationFilterChain.doFilter():210
thredds.servlet.filter.RequestQueryFilter.doFilter():118
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter():243
org.apache.catalina.core.ApplicationFilterChain.doFilter():210
thredds.servlet.filter.RequestCORSFilter.doFilterInternal():49
org.springframework.web.filter.OncePerRequestFilter.doFilter():106
org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate():343
org.springframework.web.filter.DelegatingFilterProxy.doFilter():260
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter():243
org.apache.catalina.core.ApplicationFilterChain.doFilter():210
thredds.servlet.filter.RequestPathFilter.doFilter():94
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter():243
org.apache.catalina.core.ApplicationFilterChain.doFilter():210
thredds.server.RequestBracketingLogMessageFilter.doFilter():81
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter():243
org.apache.catalina.core.ApplicationFilterChain.doFilter():210
org.apache.catalina.core.StandardWrapperValve.invoke():225
org.apache.catalina.core.StandardContextValve.invoke():123
org.apache.catalina.authenticator.AuthenticatorBase.invoke():472
org.apache.catalina.core.StandardHostValve.invoke():168
org.apache.catalina.valves.ErrorReportValve.invoke():98
org.apache.catalina.valves.AccessLogValve.invoke():927
org.apache.catalina.core.StandardEngineValve.invoke():118
org.apache.catalina.connector.CoyoteAdapter.service():407
org.apache.coyote.http11.AbstractHttp11Processor.process():1001
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process():585
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run():310
java.util.concurrent.ThreadPoolExecutor.runWorker():1110
java.util.concurrent.ThreadPoolExecutor$Worker.run():603
java.lang.Thread.run():722
Call stack 1 (2 threads)
uk.ac.rdg.resc.edal.cdm.LookUpTableGrid.generate():88
uk.ac.rdg.resc.edal.cdm.CdmUtils.createHorizontalGrid():278
uk.ac.rdg.resc.edal.cdm.CdmUtils.readCoverageMetadata():173
uk.ac.rdg.resc.edal.cdm.CdmUtils.readCoverageMetadata():126
thredds.server.wms.ThreddsDataset.<init>():95
thredds.server.wms.ThreddsDataset.getThreddsDatasetForRequest():270
thredds.server.wms.ThreddsWmsController.dispatchWmsRequest():165
uk.ac.rdg.resc.ncwms.controller.AbstractWmsController.handleRequestInternal():200
org.springframework.web.servlet.mvc.AbstractController.handleRequest():153
org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle():48
org.springframework.web.servlet.DispatcherServlet.doDispatch():919
org.springframework.web.servlet.DispatcherServlet.doService():851
org.springframework.web.servlet.FrameworkServlet.processRequest():953
org.springframework.web.servlet.FrameworkServlet.doGet():844
javax.servlet.http.HttpServlet.service():621
org.springframework.web.servlet.FrameworkServlet.service():829
javax.servlet.http.HttpServlet.service():722
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter():305
org.apache.catalina.core.ApplicationFilterChain.doFilter():210
thredds.servlet.filter.RequestQueryFilter.doFilter():118
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter():243
org.apache.catalina.core.ApplicationFilterChain.doFilter():210
thredds.servlet.filter.RequestCORSFilter.doFilterInternal():49
org.springframework.web.filter.OncePerRequestFilter.doFilter():106
org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate():343
org.springframework.web.filter.DelegatingFilterProxy.doFilter():260
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter():243
org.apache.catalina.core.ApplicationFilterChain.doFilter():210
thredds.servlet.filter.RequestPathFilter.doFilter():94
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter():243
org.apache.catalina.core.ApplicationFilterChain.doFilter():210
thredds.server.RequestBracketingLogMessageFilter.doFilter():81
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter():243
org.apache.catalina.core.ApplicationFilterChain.doFilter():210
org.apache.catalina.core.StandardWrapperValve.invoke():225
org.apache.catalina.core.StandardContextValve.invoke():123
org.apache.catalina.authenticator.AuthenticatorBase.invoke():472
org.apache.catalina.core.StandardHostValve.invoke():168
org.apache.catalina.valves.ErrorReportValve.invoke():98
org.apache.catalina.valves.AccessLogValve.invoke():927
org.apache.catalina.core.StandardEngineValve.invoke():118
org.apache.catalina.connector.CoyoteAdapter.service():407
org.apache.coyote.http11.AbstractHttp11Processor.process():1001
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process():585
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run():310
java.util.concurrent.ThreadPoolExecutor.runWorker():1110
java.util.concurrent.ThreadPoolExecutor$Worker.run():603
java.lang.Thread.run():722
The FMRC should remove the original time coordinate, as it no longer makes sense in the aggregation. For example, the ocean_time variable is still present in this best time series FMRC aggregation:
http://geoport.whoi.edu/thredds/dodsC/coawst_4/use/fmrc/coawst_4_use_best.ncd.html
In NetCDF 4.2, I do the following, which works:
// Read the whole file into memory, then open it as an in-memory NetcdfFile.
FileInputStream is = new FileInputStream(testPath);
DataInputStream dis = new DataInputStream(is);
byte[] buf = new byte[(int) is.getChannel().size()];
dis.readFully(buf);
NetcdfFile ncf = NetcdfFile.openInMemory("in-mem file", buf);
However, in NetCDF 4.3.19, this now fails because opening the "file" seems to force the creation of an index on disk, and something goes haywire in this process. See stack trace below.
Any way to disable index creation, or a better strategy to use?
My use case is one where I'm handed the bytes of a Grib file, which doesn't necessarily exist anywhere on disk.
Thanks.
java.lang.StringIndexOutOfBoundsException: String index out of range: -37
at java.lang.String.substring(String.java:1937)
at java.lang.String.substring(String.java:1904)
at ucar.nc2.grib.GribCollectionBuilder.makeFiles(GribCollectionBuilder.java:39)
at ucar.nc2.grib.grib2.Grib2CollectionBuilder.createIndex(Grib2CollectionBuilder.java:610)
at ucar.nc2.grib.grib2.Grib2CollectionBuilder.createIndex(Grib2CollectionBuilder.java:435)
at ucar.nc2.grib.grib2.Grib2CollectionBuilder.readOrCreateIndex(Grib2CollectionBuilder.java:148)
at ucar.nc2.grib.grib2.Grib2CollectionBuilder.readOrCreateIndexFromSingleFile(Grib2CollectionBuilder.java:76)
at ucar.nc2.grib.GribIndex.makeGribCollectionFromSingleFile(GribIndex.java:122)
at ucar.nc2.grib.grib2.Grib2Iosp.open(Grib2Iosp.java:311)
at ucar.nc2.NetcdfFile.<init>(NetcdfFile.java:1521)
at ucar.nc2.NetcdfFile.open(NetcdfFile.java:813)
at ucar.nc2.NetcdfFile.openInMemory(NetcdfFile.java:719)
at com.windlogics.dmf.weather.HadoopGribCrackerTest.testFileOpening(HadoopGribCrackerTest.java:37)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:154)
at junit.framework.TestCase.runBare(TestCase.java:127)
at junit.framework.TestResult$1.protect(TestResult.java:106)
at junit.framework.TestResult.runProtected(TestResult.java:124)
at junit.framework.TestResult.run(TestResult.java:109)
at junit.framework.TestCase.run(TestCase.java:118)
at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
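As an aside, the byte-reading boilerplate in the snippet above can be condensed with java.nio. This is only a sketch: the NetcdfFile call is left as a comment because it needs netcdf-java on the classpath and, per the report, still triggers the on-disk indexing path.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class InMemoryOpen {
    // Reads the whole file into memory in one call (Java 7+), replacing the
    // FileInputStream/DataInputStream boilerplate in the snippet above.
    public static byte[] readAllBytes(String path) throws IOException {
        return Files.readAllBytes(Paths.get(path));
    }

    public static void main(String[] args) throws IOException {
        // Self-contained demo: write a small temp file and read it back.
        Path tmp = Files.createTempFile("demo", ".bin");
        Files.write(tmp, new byte[] {1, 2, 3});
        byte[] buf = readAllBytes(tmp.toString());
        // The report then hands buf to the library; the GRIB IOSP may still
        // try to write an index file to disk when it recognizes GRIB content:
        // NetcdfFile ncf = NetcdfFile.openInMemory("in-mem file", buf);
        System.out.println(buf.length + " bytes read");
    }
}
```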
Hi,
I would like to report a problem we have retrieving dataset data from our THREDDS instance using netcdf-all.
We have tools developed in R that invoke netcdf-all; to download data, our tool defines the following:
HTTPBasicProvider bp = new HTTPBasicProvider("user", "password");
HTTPSession.setGlobalCredentialsProvider(bp);
Then, we open the dataset:
GridDataset gd = GridDataset.open("URL_TO_DATASET");
and after doing that, we execute GeoGrid.makeSubset and GridDataType.readDataSlice many times, each of which creates new requests to THREDDS.
The problem is that the way the requests are made forces THREDDS to create a new session and re-authenticate on each request, because the cookie store reference is lost. This behavior floods our Tomcat, creating up to 200 simultaneous sessions per user while downloading the data. I examined the code closely to find out what was happening:
For example, each time DConnect2.openConnection is called a new session is generated, and the cookies from the previous session are lost, because the cookies come from SessionClient, which is recreated in each new session.
As you know, much of the code in the classes I mentioned is deprecated; in my opinion the client should be replaced by the new CloseableHttpClient, created through its builder. Also, the sheer amount of code in these classes makes them difficult to maintain.
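The netcdf-java HTTP layer is built on Apache HttpClient, so the following is only a sketch of the underlying principle with the JDK's own classes: install one process-wide cookie store so session cookies survive across connections instead of being discarded with each new client. The cookie name and URI below are illustrative, not taken from the TDS code.

```java
import java.net.CookieHandler;
import java.net.CookieManager;
import java.net.CookiePolicy;
import java.net.HttpCookie;
import java.net.URI;

public class SharedCookies {
    // One process-wide cookie store: every HttpURLConnection made after this
    // call reuses the same session cookies instead of starting a new session.
    public static CookieManager install() {
        CookieManager manager = new CookieManager(null, CookiePolicy.ACCEPT_ALL);
        CookieHandler.setDefault(manager);
        return manager;
    }

    public static void main(String[] args) throws Exception {
        CookieManager manager = install();
        // Simulate a server handing back a session cookie on a first request...
        HttpCookie session = new HttpCookie("JSESSIONID", "abc123");
        session.setPath("/");
        manager.getCookieStore().add(new URI("http://example.invalid/"), session);
        // ...which is then visible to every later request in the same process.
        System.out.println(manager.getCookieStore().getCookies().size() + " cookie(s) stored");
    }
}
```

With Apache HttpClient 4.3+, the equivalent would be building the client once with a shared BasicCookieStore via HttpClients.custom().setDefaultCookieStore(...), rather than constructing a new client per connection.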
The Upgrading to TDS 4.5 document doesn't seem to be available from the current TDS page, which includes only the Upgrading to TDS 4.6 document. The current stable -> dev upgrade path includes the 4.5 changes. In particular, the breaking NCSS path changes (removal of /grid) aren't mentioned in the 4.6 upgrade doc, and old 4.3.x-style NCSS URLs that include /grid result in blank 200 responses from TDS 4.6, which is confusing. Please link the 4.5 upgrade document from the main page, or alternatively from the 4.6 upgrade document.
The datasets generated by feature collections are not applying the restrictAccess attribute.
For example:
<dataset name="NCEP's CFSv2 Datasets" ID="cfsrrdatasets" restrictAccess="cfsrr">
<featureCollection name="cfsAgg" featureType="FMRC" path="cfs/agg">
<collection spec="/hiddendirectory/**/.+\.grb2$"
dateFormatMark="yyyyMMddHH#.time.grb2#"/>
<update startup="false" trigger="allow"/>
<fmrcConfig regularize="true" datasetTypes="TwoD"/>
</featureCollection>
<dataset name="CFS File Dataset" ID="cfs/cfs.grib2" urlPath="cfs/tmax/198201/tmax.1982010106.time.grb2"/>
</dataset>
the dataset generated for the featureCollection has the property restrictAccess, but the redirect to restrictedAccess is not being made, and therefore no restriction is applied.
You can see this online:
[http://meteo.unican.es/tds5/catalog/cfs/agg/catalog.html]
But the dataset generated ad hoc does apply the restriction, and therefore the authentication challenge is made:
[http://meteo.unican.es/tds5/catalogs/cfs/cfsDatasets.html?dataset=cfs/cfs.grib2]
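Since featureCollection is a kind of dataset element in the catalog schema, one workaround worth trying (untested here, a sketch only) is to set restrictAccess directly on the featureCollection element rather than relying on inheritance from the enclosing dataset:

```xml
<featureCollection name="cfsAgg" featureType="FMRC" path="cfs/agg" restrictAccess="cfsrr">
  <collection spec="/hiddendirectory/**/.+\.grb2$"
              dateFormatMark="yyyyMMddHH#.time.grb2#"/>
  <update startup="false" trigger="allow"/>
  <fmrcConfig regularize="true" datasetTypes="TwoD"/>
</featureCollection>
```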
Hi all,
I have an application that stores metadata in the HDF5 format, but the real "raw" data in a separate file. The HDF5 file refers to the raw file. I would like to use this library to parse the metadata. I don't have to read the external raw file, just the metadata in the HDF5. However, NetcdfFile.open bails out with this stack trace:
***_UNPROCESSED MESSAGE type = ExternalDataFiles(7) raw = 7
java.io.IOException: java.lang.UnsupportedOperationException: ***_UNPROCESSED MESSAGE type = ExternalDataFiles(7) raw = 7
at ucar.nc2.NetcdfFile.open(NetcdfFile.java:425)
at ucar.nc2.dataset.NetcdfDataset.openOrAcquireFile(NetcdfDataset.java:699)
at ucar.nc2.dataset.NetcdfDataset.openFile(NetcdfDataset.java:560)
at ucar.nc2.NCdumpW.print(NCdumpW.java:168)
at ucar.nc2.NCdumpW.main(NCdumpW.java:982)
Caused by: java.lang.UnsupportedOperationException: ****UNPROCESSED MESSAGE type = ExternalDataFiles(7) raw = 7
at ucar.nc2.iosp.hdf5.H5header$HeaderMessage.read(H5header.java:2714)
at ucar.nc2.iosp.hdf5.H5header$DataObject.readMessagesVersion2(H5header.java:2420)
at ucar.nc2.iosp.hdf5.H5header$DataObject.(H5header.java:2306)
at ucar.nc2.iosp.hdf5.H5header$DataObject.(H5header.java:2180)
at ucar.nc2.iosp.hdf5.H5header.getDataObject(H5header.java:2027)
at ucar.nc2.iosp.hdf5.H5header.access$600(H5header.java:70)
at ucar.nc2.iosp.hdf5.H5header$DataObjectFacade.(H5header.java:2073)
at ucar.nc2.iosp.hdf5.H5header.readGroupNew(H5header.java:3920)
at ucar.nc2.iosp.hdf5.H5header.access$900(H5header.java:70)
at ucar.nc2.iosp.hdf5.H5header$H5Group.(H5header.java:2153)
at ucar.nc2.iosp.hdf5.H5header$H5Group.(H5header.java:2118)
at ucar.nc2.iosp.hdf5.H5header.makeNetcdfGroup(H5header.java:472)
at ucar.nc2.iosp.hdf5.H5header.makeNetcdfGroup(H5header.java:473)
at ucar.nc2.iosp.hdf5.H5header.read(H5header.java:215)
at ucar.nc2.iosp.hdf5.H5iosp.open(H5iosp.java:128)
at ucar.nc2.NetcdfFile.(NetcdfFile.java:1521)
at ucar.nc2.NetcdfFile.open(NetcdfFile.java:813)
at ucar.nc2.NetcdfFile.open(NetcdfFile.java:422)
... 4 more
Now, I don't need netcdf to understand the external file, but it would be nice if it simply ignored it, or logged a warning. Or, even better, supported the ExternalDataFiles message.
Is there any way to work around this? Or might it be relatively easy to patch?
Kind regards,
Rob van Nieuwpoort
Netherlands eScience center
[email protected]
We are using NcML to apply CF 1.6 discrete sampling geometry conventions to some time series data. This involves removing some variables that have a depth dimension so that we have only featureType="timeSeries" variables in the dataset.
When we try the NcML locally, it works fine, but when we put the NcML in our TDS, it fails to remove the depth-dependent variables.
It seems to be a problem with the order of the dimensions, because if we write the NcML with dimension order (station,time) instead of (time,station) it works.
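For reference, the kind of NcML used to do the removal is sketched below; the variable name is a placeholder, not taken from the gists linked underneath:

```xml
<netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2"
        location="timeseries.nc">
  <!-- hypothetical name: drop a variable that carries the depth dimension -->
  <remove name="temp" type="variable"/>
  <!-- declare the CF discrete sampling geometry -->
  <attribute name="featureType" value="timeSeries"/>
</netcdf>
```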
XML TDS catalog with two identical datasets, but with (time,station) and (station,time)
https://gist.github.com/rsignell-usgs/5484245
NcML: (time,station)
https://gist.github.com/rsignell-usgs/5484269
-Rich
netcdf-4.3.22 is deployed to Maven Central at http://search.maven.org/#browse%7C2058465994 but its dependency thredds-parent-4.3.22 is not. Can this be rectified please?
On the OPeNDAP Dataset Access Form, when I tick a variable, the indices for its dimension are correctly set by default to cover the full extent of that dimension, but only when the variable (try with DEPTH) has a single dimension, as here:
When it has more than one dimension, the defaults are wrong: a [0:1:0] extent is suggested for each of the dimensions:
In version 4.3.22 the HTTPSSLProvider works, but in 4.5.3 it has stopped working.
The JVM command-line flags
-Dkeystore
-Dkeystorepassword
-Dtruststore
-Dtruststorepassword
are accepted, and the HTTPSSLProvider is registered in the HTTPSession class, but the SSL context used does not load the truststore and keystore provided.
If -Djavax.net.debug=all is enabled in the JVM, you can see that the truststore loaded is the system default one. This can be overridden with the JSSE flags
-Djavax.net.ssl.trustStore
-Djavax.net.ssl.trustStorePassword
-Djavax.net.ssl.keyStore
-Djavax.net.ssl.keyStorePassword
and then the provided truststore is loaded instead of the default one.
But the keyStore parameter is still not working.
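For comparison, the wiring those flags are apparently meant to perform can be done explicitly with the standard JSSE API. This is a generic sketch, not the HTTPSession code: passing a null trust store to TrustManagerFactory.init falls back to the JVM default (cacerts), and the key store here is an empty in-memory one just so the example runs end to end.

```java
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class CustomSslContext {
    // Builds an SSLContext from an explicit key store and trust store,
    // mirroring what the -Dkeystore/-Dtruststore flags are expected to do.
    public static SSLContext build(KeyStore keyStore, char[] keyPass, KeyStore trustStore)
            throws Exception {
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, keyPass);
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore); // null -> JVM default trust store (cacerts)
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        // Empty in-memory key store just to show the wiring runs.
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, null);
        SSLContext ctx = build(ks, new char[0], null);
        System.out.println(ctx.getProtocol());
    }
}
```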
I'm running Plumbr, which monitors our THREDDS server for problems. It reported this heap leak on our TDS Version 4.6.0 - 20150326.1318, and I'm providing it here in case it's useful. Is this useful, or useless?
The memory leak detected is caused by a programming error.
The following information indicates the location of the leak in the source code
At the time the leak was detected, 244 objects existed of class
java.awt.image.BufferedImage
These objects had been created at
uk.ac.rdg.resc.ncwms.graphics.ImageProducer.createImage(uk.ac.rdg.resc.ncwms.graphics.ImageProducer$Components, java.lang.String):243
and were being held
in elementData of java.util.ArrayList
in renderedFrames of uk.ac.rdg.resc.ncwms.graphics.ImageProducer
in a local variable defined in uk.ac.rdg.resc.ncwms.controller.AbstractWmsController.getMap()
The netcdf-java client applications/utilities are not storing cookies sent back by the server.
The cookies returned from the server are discarded, creating a new session on every connection to the server (DODS or HTTP).
For example, if user authentication is required, this means every request to the server creates a new session and therefore a new authentication process.
Folks-
Not sure if this is a THREDDS issue or not, but please steer me in the right direction.
For an ADCP file with samples between 6 and -0.5 meters (6 meters deep to just above the surface), I think I have positive-down in the right place in my NcML, but the geospatial metadata doesn't seem to pick up on it.
For my files with multiple depths, I put this in my geospatial coverage (metadata inherited=true):
<updown>
  <start>6.044800</start>
  <size>6.5</size>
  <units>meters</units>
</updown>
Then have this in my depth variable:
but I end up with the wrong thing (Altitude should be 6.0488 to -0.5, positive is DOWN) in the geospatial bounds on the dataset page:
GeospatialCoverage:
Longitude: 13.7566 to 13.7566 degrees_west
Latitude: 43.296902 to 43.296902 degrees_north
Altitude: 6.0448 to 12.5448 meters (positive is up)
for an example see: http://geoport-dev.whoi.edu/thredds/ts/EUROSTRATAFORM.html?dataset=EUROSTRATAFORM/7011adc-a_2d.nc
Do I need to tell the geospatial bounds metadata that positive is down?
Thanks for any suggestions!
Best, Ellyn
Currently thredds-parent is set to version 4.5.0-SNAPSHOT, while the modules still use 4.3.19. This breaks builds for users without thredds-parent 4.3.19 already in their local repositories, since the Unidata repositories are configured in thredds-parent and thredds-parent 4.3.19 isn't in the normally searched repos. Upgrading all modules to use thredds-parent 4.5.0-SNAPSHOT will fix.
According to the ACDD Mappings, the ISO 19115-2 fileIdentifier is supposed to be computed from the combination of the netcdf attributes id and naming_authority.
Yet if we look at a dataset where both naming_authority and id are specified, we find that currently this is not working:
While the OPeNDAP Dataset Access Form specifies:
id: roms_hiog_forecast
naming_authority: org.pacioos
when we look at the ISO record we see
<gmd:fileIdentifier>
<gco:CharacterString>roms_hiog_forecast/ROMS_Oahu_Regional_Ocean_Model_best.ncd</gco:CharacterString>
</gmd:fileIdentifier>
which is just the THREDDS path, with no naming authority information.
It looks like there are two problems with the UnidataDD2MI.xsl file:
1. The id logic first checks for a THREDDS ID, and then for a netcdf id attribute. It should be the other way around: there will always be a THREDDS ID (the pathname), so the netcdf id, if present, will never be used. See these lines.
2. The fileIdentifier is being constructed from the id only, without the naming_authority information. See these lines.
I am trying to configure THREDDS in order to add SSL to the login. I configured my web.xml as suggested here http://www.unidata.ucar.edu/software/thredds/v4.5/tds/reference/RestrictedAccess.html, adding "useSSL=true" and "transport-guarantee = CONFIDENTIAL" to the restrictedAccess security constraint.
My Tomcat's server.xml is also configured to expose the SSL connector:
After configuring that, I try to access the test dataset (protected by a role) through the OPeNDAP service, and I'm successfully redirected to the login (SSL). Nevertheless, after entering my credentials, Firefox warns about a circular redirect.
I have debugged this process many times and searched Google trying to find out what is happening. If the session is created with a secured cookie in the HTTPS context, then when you switch back to HTTP the session is lost. This makes sense when you debug the whole process.
When the request is in the TomcatAuthorizer.authorize method, it has a non-secured session, created previously, to which some new attributes are added. A header is also added, and then it redirects to https://localhost:8443/restrictedAccess/myrole
When credentials are entered, TomcatAuthorizer.doGet is called and tries to find the role. I can see the user's grants and everything seems to be OK; nevertheless, the session id is different. The request is redirected to the original URL. After the redirect, the request reaches TomcatAuthorizer.authorize, but it doesn't have the same (non-secured) session, so req.isUserInRole(role) is false and the process begins again and again. Obviously the browser warns you about this situation. If you repeat the process without SSL, the second redirect has the role and shows the data, because both requests share the same session.
I don't know whether I'm doing something wrong or something is missing. I'm using version 4.5.5. Could you help me?
Best regards,
Manuel Vega.
University of Cantabria.
In ucar.units.SI, the symbol for degrees Celsius is specified as "Cel":
https://github.com/Unidata/thredds/blob/master/udunits/src/main/java/ucar/units/SI.java#L520
However, in the udunits2 xml files, the symbol is specified as "°C":
http://www.unidata.ucar.edu/software/udunits/udunits-2/udunits2-derived.xml
In addition, there is no alias defined in SI for "°C", so trying to parse "°C" with a ucar.units.StandardUnitFormat fails.
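Until an alias is added, a client-side workaround is to rewrite the UDUNITS-2 symbol to the one ucar.units.SI actually defines before parsing. This is a sketch of that string normalization only; whether "Cel" then parses depends on the SI table cited above.

```java
public class CelsiusSymbol {
    // Per the report, ucar.units.SI defines the Celsius symbol as "Cel" and
    // has no alias for "°C", so rewrite the UDUNITS-2 symbol before handing
    // the unit string to ucar.units.StandardUnitFormat.
    public static String normalize(String unit) {
        return unit.replace("\u00B0C", "Cel");
    }

    public static void main(String[] args) {
        System.out.println(normalize("\u00B0C")); // prints: Cel
        System.out.println(normalize("m.s-1"));   // unaffected units pass through
    }
}
```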
In a change from previous versions (4.3.x), TDS seems to have lost the ability to serve files from the public content directory other than css, gif, and jpg files. The description in the docs doesn't mention this limitation:
NOTE: The best way to use your own logo is to put it in the ${tomcat_home}/content/thredds/public/ directory, and specify it in serverInformation as /thredds/
This seems to be the code restricting the extensions, via a Spring RequestMapping pattern.
It also seems that subdirectories of public can no longer be used either (e.g. an images subdirectory). These two changes bit us when upgrading from 4.3.20 to 4.6.0, as we used to serve a PNG logo image from public/images/header.png.
Hi,
the webpage http://www.unidata.ucar.edu/software/netcdf-java/reference/netcdf4Clibrary.html links to nc4_64_dll.zip and mentions the deps directory. However, this directory is not present in the zip file, and finding the correct versions of all the DLLs is quite tricky. Would it be possible to include the deps folder in the zip, or at least add a link there to the correct downloads?
http://www.unidata.ucar.edu/software/netcdf/win_netcdf/nc4_64_dll.zip
Thanks
I would like to be able to sequentially read records from a large GRIB2 file (too large to read all at once) and extract data for a particular variable. I use NetcdfFile.openInMemory(String, byte[]) to do that. However, because the call writes an index file to disk even for small records, reading the whole file takes hours.
Is there any way to disable the index file creation? Or any way to circumvent openInMemory() by using some low-level methods?
WMS is greatly enhanced in TDS 4.6, but there still seem to be a few lingering WMS/Godiva2 issues.
I tried going to thredds.ucar.edu/thredds, which is at TDS 4.6.1:
and I selected the 40km NAM, then selected potential temperature @ fixed height above ground and clicked auto. There are three problems I identified; one is that if you click auto again, you get no plot.
(using netcdf4-4.5.1)
REPRO:
mkdir -p ~/temp && cd ~/temp
wget "http://www.ftp.ncep.noaa.gov/data/nccf/com/gfs/prod/gfs.2014071900/gfs.t00z.master.grbf00.10m.uv.grib2"
public final class Driver {
public static void main(String[] args) throws Exception {
ucar.nc2.NetcdfFile.open("gfs.t00z.master.grbf00.10m.uv.grib2");
}
}
Run the Driver class above (from ~/temp).
RESULT:
Caused by: java.lang.NullPointerException
at ucar.nc2.util.DiskCache2.canWrite(DiskCache2.java:256)
at ucar.nc2.util.DiskCache2.getFile(DiskCache2.java:236)
at ucar.nc2.grib.collection.GribCollection.getFileInCache(GribCollection.java:203)
at ucar.nc2.grib.grib2.Grib2Index.readIndex(Grib2Index.java:119)
at ucar.nc2.grib.GribIndex.readOrCreateIndexFromSingleFile(GribIndex.java:100)
at ucar.nc2.grib.collection.Grib2CollectionBuilder.makeGroups(Grib2CollectionBuilder.java:115)
at ucar.nc2.grib.collection.GribCollectionBuilder.createIndex(GribCollectionBuilder.java:117)
at ucar.nc2.grib.collection.GribCdmIndex.openGribCollectionFromDataFile(GribCdmIndex.java:632)
at ucar.nc2.grib.collection.GribCdmIndex.openGribCollectionFromDataFile(GribCdmIndex.java:616)
at ucar.nc2.grib.collection.GribCdmIndex.makeGribCollectionFromRaf(GribCdmIndex.java:585)
at ucar.nc2.grib.collection.GribIosp.open(GribIosp.java:181)
at ucar.nc2.NetcdfFile.<init>(NetcdfFile.java:1527)
at ucar.nc2.NetcdfFile.open(NetcdfFile.java:819)
at ucar.nc2.NetcdfFile.open(NetcdfFile.java:427)
... 4 more
To work around this bug, use an absolute path to open the file:
ucar.nc2.NetcdfFile.open("/home/bob/temp/gfs.t00z.master.grbf00.10m.uv.grib2");
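The workaround can be generalized: resolve whatever path the caller supplies to an absolute form before opening it. This is a sketch using only java.nio; the class name and the idea of wrapping the open call are mine, not from the report.

```java
import java.nio.file.Paths;

public class AbsolutePath {
    // Resolve a possibly relative path to an absolute, normalized form before
    // handing it to NetcdfFile.open(), sidestepping the DiskCache2 NPE that
    // relative paths appear to trigger.
    public static String toAbsolute(String path) {
        return Paths.get(path).toAbsolutePath().normalize().toString();
    }

    public static void main(String[] args) {
        System.out.println(toAbsolute("gfs.t00z.master.grbf00.10m.uv.grib2"));
        // then: ucar.nc2.NetcdfFile.open(toAbsolute("gfs.t00z.master.grbf00.10m.uv.grib2"));
    }
}
```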