
fdsn's Introduction

FDSN

Federation of Digital Seismic Networks (FDSN) Web Services (FDSN-WS) http://www.fdsn.org/webservices/

Refer to notes in cmd/*/deploy/DEPLOY.md for specific deployment requirements.

Applications

fdsn-ws

Provides FDSN web services.

Dataselect

FDSN dataselect has been implemented; it queries and serves data from miniSEED files stored in an Amazon S3 bucket.

This example uses curl to download the result of a single query to the file test.mseed:

curl "http://localhost:8080/fdsnws/dataselect/1/query?network=NZ&station=CHST&location=01&channel=LOG&starttime=2017-01-09T00:00:00&endtime=2017-01-09T23:00:00" -o test.mseed

This example sends multiple queries in one POST request, in this case saving the result to test_post.mseed:

curl -v --data-binary @post_input.txt http://localhost:8080/fdsnws/dataselect/1/query -o test_post.mseed

The contents of post_input.txt:

NZ ALRZ 10 EHN 2017-01-09T00:00:00 2017-01-09T02:00:00
NZ ALRZ 10 AC* 2017-01-02T00:00:00 2017-01-10T00:00:00
NZ ALRZ 10 B?  2017-01-09T00:00:00 2017-01-10T00:00:00
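The POST body is plain text with one selection per line: network, station, location, channel, start time, end time. A minimal Python sketch of building and submitting such a body (the endpoint URL is the local example from above; function names are illustrative):

```python
from urllib import request

def build_post_body(selections):
    """Each selection is (network, station, location, channel, start, end)."""
    return "\n".join(" ".join(fields) for fields in selections) + "\n"

def fetch_dataselect(url, selections, out_path):
    """POST the selections and save the returned miniSEED to out_path."""
    body = build_post_body(selections).encode()
    req = request.Request(url, data=body)  # supplying data makes this a POST
    with request.urlopen(req) as resp, open(out_path, "wb") as f:
        f.write(resp.read())

body = build_post_body([
    ("NZ", "ALRZ", "10", "EHN", "2017-01-09T00:00:00", "2017-01-09T02:00:00"),
    ("NZ", "ALRZ", "10", "B?", "2017-01-09T00:00:00", "2017-01-10T00:00:00"),
])
```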

fdsn-quake-consumer

Receives notifications for SeisComPML (SC3ML) event data uploads to S3 and stores the SC3ML in the DB.

The following versions of SC3ML can be handled (the quake content is identical across these versions, but they are processed with separate XSLTs to stay consistent with upstream changes):

  • 0.7
  • 0.8
  • 0.9

The only version of QuakeML created and stored is 1.2.

fdsn-holdings-consumer

Receives notifications for miniSEED file uploads to S3, indexes the files, and saves the results to the holdings DB.

Test tool

A test bash script, fdsn-batch-test.sh, loads URLs from fdsn-test-urls.txt and can be used to test against both the FDSN and FDSN-NRT web services.

fdsn-batch-test.sh

./fdsn-batch-test.sh {optional_test_fdsn_service}

Example: ./fdsn-batch-test.sh https://test.geonet.org.nz.

When {optional_test_fdsn_service} is omitted, the script tests against both https://service.geonet.org.nz and https://service-nrt.geonet.org.nz. When it is present, the script tests against that service using the same URLs defined in the txt file. Note that the time range is based on NRT (20 minutes ago).

fdsn-test-urls.txt

Update fdsn-test-urls.txt to change the URLs to test.

For tests expecting an HTTP response status code other than 200, append ;;{expected_http_code} after the URL, for example:

http://service.geonet.org.nz/fdsnws/dataselect/1/query?network=NZ&sta=RBCT&channel=????&starttime=2018-05-15T23:45:00&endtime=2018-05-15T23:45:10;;204

The ;;204 suffix above makes the test script check that the HTTP status of the response is 204.
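The URL-plus-expected-code line format can be parsed with a small helper; this is a sketch of the convention described above, not the script's actual implementation:

```python
def parse_test_line(line):
    """Split a test URL line into (url, expected_status).

    Lines may end with ';;{code}'; otherwise 200 is expected.
    """
    line = line.strip()
    if ";;" in line:
        url, code = line.rsplit(";;", 1)
        return url, int(code)
    return line, 200
```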

fdsn's People

Contributors

bobymcbobs, bpeng, callumnz, crleblanc, danieldooley, dependabot[bot], francovm, jenlowe, junghao, nbalfour, quiffman, sorennh, sue-h-gns


fdsn's Issues

Service limits and HTTP errors. Documentation?

I've been trying to find an answer as to how FDSN handles preparing large requests and sending valid HTTP/1.1 responses to the client. IRIS seem not to bother: they send a 200 response, but preparing and sending the response can still fail, so there would be no indication to the client that the request failed. This is basically a weakness of the FDSN spec and how it uses HTTP/1.1.

We have the same problem and need to document this somehow.

IRIS's explanation below is from "Considerations" https://service.iris.edu/fdsnws/dataselect/docs/1/help/

In general, it is preferable to not ask for too much data in a single request. Large requests take longer to complete. If a large request fails due to any networking issue, it will have to be resubmitted to be completed. This will cause the entire request to be completely reprocessed and re-transmitted. By breaking large requests into smaller requests, only the smaller pieces will need to be resubmitted and re-transmitted if there is a networking problem. Web service network connections will break after 5 to 10 minutes if no data is transmitted. For large requests, the fdsnws-dataselect web service can take several minutes before it starts returning data. When this happens, the web service may “flush” the HTTP headers with an “optimistic” success (200) code to the client in order to keep the network connection alive. This gives about 10 minutes to the underlying data retrieval mechanism to start pulling data out of the IRIS archive. Thus for larger requests, the HTTP return code can be unreliable. As data is streamed back to the client, the fdsnws-dataselect service partially buffers the returned data. During time periods when the underlying retrieval mechanism stalls, the web service will dribble the partial buffer to the client in an effort to keep the network connection alive.

It is less efficient to ask for too little data in each request. Each time a request is made, a network connection must be established and a request processing unit started. For performance reasons, it is better to group together selections from the same stations and place them in the same request. This is especially true of selections that cover the same time periods.

This utility should handle a week or month of data from several stations.
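IRIS's advice above (break large requests into smaller ones so that only a failed piece needs to be resubmitted and re-transmitted) amounts to chunking the requested time range. A sketch, with illustrative names:

```python
from datetime import datetime, timedelta

def chunk_window(start, end, step=timedelta(days=1)):
    """Yield (chunk_start, chunk_end) pairs covering [start, end).

    Each chunk becomes one dataselect request; a networking failure
    then only costs the chunk it happened in.
    """
    t = start
    while t < end:
        yield t, min(t + step, end)
        t += step

chunks = list(chunk_window(datetime(2017, 1, 1), datetime(2017, 1, 8)))
```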

Problems with FDSN station service in obspy

I have been playing with the station service. Queries are broken in obspy but not via the URL, although the URL returns different things on the two services, which is not surprising. In summary: I can't break the URL, but obspy is broken. I am happy to come and sit with you to look over things and see if there's anything I can help with.

URL query http://service.geonet.org.nz/fdsnws/station/1/query?station=A*&location=20
returns 12 stations

URL query http://beta-service.geonet.org.nz/fdsnws/station/1/query?station=A*&location=20
returns 109 stations

Python Obspy query
inventory = client.get_stations(station="A*",location="20")

with regular client "GeoNet" linked to service.geonet.org.nz returns:

	Created by: IRIS WEB SERVICE: fdsnws-station | version: 1.1.25
		    http://service.iris.edu/fdsnws/station/1/query?format=xml&location=...
	Sending institution: IRIS-DMC (IRIS-DMC)
	Contains:
		Networks (4):
			CU
			IU
			TA
			US
		Stations (18):
			CU.ANWB (Willy Bob, Antigua and Barbuda)
			IU.ADK (Adak, Aleutian Islands, Alaska)
			IU.ADK (Adak, Aleutian Islands, Alaska)
			IU.AFI (Afiamalu, Samoa)
			IU.AFI (Afiamalu, Samoa)
			IU.AFI (Afiamalu, Samoa)
			IU.ANMO (Albuquerque, New Mexico, USA)
			IU.ANMO (Albuquerque, New Mexico, USA)
			IU.ANMO (Albuquerque, New Mexico, USA)
			IU.ANMO (Albuquerque, New Mexico, USA)
			IU.ANTO (Ankara, Turkey)
			TA.A21K (Barrow, AK, USA)
			TA.A36M (Sachs Harbour, NT, CAN)
			US.AAM (Ann Arbor, Michigan, USA)
			US.ACSO (Alum Creek State Park, Ohio, USA)
			US.AGMN (Agassiz National Wildlife Refuge, Minnesota, USA)
			US.AHID (Auburn Hatchery, Idaho, USA)
			US.AMTX (Amarillo, Texas, USA)
		Channels (0):

While beta-service.geonet.org.nz returns:

---------------------------------------------------------------------------
FDSNException                             Traceback (most recent call last)
<ipython-input-36-8565ad0a3a25> in <module>()
      9 
     10 #inventory = client.get_stations(starttime=starttime,endtime=endtime,latitude=-42.693,longitude=173.022,maxradius=0.5, level="channel")
---> 11 inventory = client.get_stations(station="A*",location="20")
     12 #inventory = client.get_stations(starttime="2016-11-13 11:00:00.000", endtime="2016-11-14 11:00:00.000",network="NZ", location="20", level="channel")
     13 print(inventory)

C:\Users\natalieb\AppData\Local\Continuum\Anaconda3\lib\site-packages\obspy\clients\fdsn\client.py in get_stations(self, starttime, endtime, startbefore, startafter, endbefore, endafter, network, station, location, channel, minlatitude, maxlatitude, minlongitude, maxlongitude, latitude, longitude, minradius, maxradius, level, includerestricted, includeavailability, updatedafter, matchtimeseries, filename, format, **kwargs)
    611             "station", DEFAULT_PARAMETERS['station'], kwargs)
    612 
--> 613         data_stream = self._download(url)
    614         data_stream.seek(0, 0)
    615         if filename:

C:\Users\natalieb\AppData\Local\Continuum\Anaconda3\lib\site-packages\obspy\clients\fdsn\client.py in _download(self, url, return_string, data, use_gzip)
   1340             msg = ("Bad request. If you think your request was valid "
   1341                    "please contact the developers.")
-> 1342             raise FDSNException(msg, server_info)
   1343         elif code == 401:
   1344             raise FDSNException("Unauthorized, authentication required.",

FDSNException: Bad request. If you think your request was valid please contact the developers.

FDSN beta service waveforms do not begin and end at requested times

Shifted from GeoNet/help#41

This actually holds true for the current fdsn service as well. The below example uses the obspy fdsn client, which appears to pass arguments properly to urls:

from obspy.clients.fdsn import Client
from obspy import UTCDateTime

client = Client('http://beta-service.geonet.org.nz')
st = client.get_waveforms(
    network='NZ', station='CNGZ', location='*', channel='EHZ',
    starttime=UTCDateTime('2016-09-04T04:00:00.000000'),
    endtime=UTCDateTime('2016-09-04T05:00:00.000000'))
print(st)
1 Trace(s) in Stream:
NZ.CNGZ.10.EHZ | 2016-09-04T04:00:01.308441Z - 2016-09-04T04:59:56.868441Z | 100.0 Hz, 359557 samples

While this queries the database accurately (generating the URL: http://beta-service.geonet.org.nz/fdsnws/dataselect/1/query?channel=EHZ&station=CNGZ&starttime=2016-09-04T04%3A00%3A00.000000&location=%2A&endtime=2016-09-04T05%3A00%3A00.000000&network=NZ),

The returned data do not start at the requested start-time, nor end at the requested end-time.

It looks like the FDSN spec states that data can start at the start-time or after (and vice versa for the end-time), but other services provide data closer to what is expected. It would be really nice if the GeoNet FDSN did the same.

I say that this holds for the current FDSN, but that isn't quite true: a similar request using the current FDSN service yields the following data:

1 Trace(s) in Stream:
NZ.CNGZ.10.EHZ | 2016-09-04T03:59:54.568443Z - 2016-09-04T05:00:02.408443Z | 100.0 Hz, 360785 samples

In this case, the data do not appear to meet the FDSN specs, both starting before the given time, and ending after the end-time.

Further to this - a more general question, why do GeoNet data not have samples at zero-millisecond times (e.g. the closest sample to the given start-time is actually at 2016-09-04T03:59:59.998443). Is this an accumulated leap-second thing?
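The zero-millisecond question above comes down to sample-grid arithmetic: samples sit on a fixed grid anchored at the trace's first sample, so the sample closest to a round-second request is generally offset from it. A hypothetical sketch (times as seconds from an arbitrary epoch):

```python
def nearest_sample_time(trace_start, sample_rate_hz, requested):
    """Return the time of the sample on the trace's grid closest to `requested`.

    trace_start and requested are seconds from the same epoch; the grid is
    trace_start + n / sample_rate_hz for integer n.
    """
    period = 1.0 / sample_rate_hz
    n = round((requested - trace_start) / period)
    return trace_start + n * period
```

For a 100 Hz trace whose grid is offset by 0.008443 s (as in the issue), the sample closest to a round second lands 1.557 ms before it, matching the ...59.998443 timestamp reported above.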

Reindexing the archive

The fdsn-beta service currently has 2016 data for testing. I'm about to start indexing the full GeoNet archive so fdsn-beta may behave badly for 2016 while this process is carried out.

I will close this issue when I'm done.

data select service limits

Are there any "sensible" service limits for the data select service? For example - max request size of 24 hours or something like that?

@quiffman may have some insight about current FDSN usage patterns.

It seems to me that if users really want a lot of data we should ask them to get in touch and encourage them to use an S3 client.

Error with Swarm requesting station config

I've tried the station service with Swarm and get the following error. This is a client sending a request we can't change: is it sending a valid starttime? We should try to make this work on the server side.

2017-06-01 10:52:32  WARN - could not get channels: Error in connection with url: http://beta-service.geonet.org.nz/fdsnws/station/1/query?&format=text&level=station&network=NZ&starttime=2017-05-31T22:52:32.771

SQS alarms

There are SQS queues for archive upload notification messaging. This includes dead letter queues. We need to decide what (if anything) to alarm on and where to send the notification in alarm state.

Typical alarm states that are worth notifying for are:

  • any messages on the dead letter queue (dlq). These could not be processed for some reason and after a small number of delivery attempts are moved to the dlq. I have had issues with alarms on dlq sitting in an "insufficient data state" because nothing is happening.
  • messages backing up on a main messaging queue which can mean the consumer has stopped or is not keeping up for some reason.

@quiffman @ozym @nbalfour - it would be good to have some discussion about what to do then we can get this ticket to a ready state.

Station response issues

This is an issue that I haven't had time to get my head around but I will provide as much information about the problem that I can.

Firstly, where I used python I used python3.

My first test was to attach the response metadata to the waveform data I collected and try to remove the response.

from obspy import UTCDateTime
from obspy.clients.fdsn import Client as FDSN_Client

client = FDSN_Client("http://beta-service.geonet.org.nz/")
t = UTCDateTime("2016-09-01T16:37:00.000")
st = client.get_waveforms("NZ", "TDHS","20", "?N?", t, t + 300,attach_response=True)
pre_filt = (0.005, 0.006, 30.0, 35.0)
st.remove_response(output='ACC', pre_filt=pre_filt)
st.plot()

This is the error it returned:

---------------------------------------------------------------------------
ObsPyException                            Traceback (most recent call last)
<ipython-input-1-a005388685ac> in <module>()
     10 st = client.get_waveforms("NZ", "TDHS","20", "?N?", t, t + 300,attach_response=True)
     11 pre_filt = (0.005, 0.006, 30.0, 35.0)
---> 12 st.remove_response(output='ACC', pre_filt=pre_filt)
     13 st.plot()
     14 pga = st.max()

C:\Users\natalieb\AppData\Local\Continuum\Anaconda3\lib\site-packages\obspy\core\stream.py in remove_response(self, *args, **kwargs)
   3029         """
   3030         for tr in self:
-> 3031             tr.remove_response(*args, **kwargs)
   3032         return self
   3033 

<decorator-gen-162> in remove_response(self, inventory, output, water_level, pre_filt, zero_mean, taper, taper_fraction, plot, fig, **kwargs)

C:\Users\natalieb\AppData\Local\Continuum\Anaconda3\lib\site-packages\obspy\core\trace.py in _add_processing_info(func, *args, **kwargs)
    230     info = info % "::".join(arguments)
    231     self = args[0]
--> 232     result = func(*args, **kwargs)
    233     # Attach after executing the function to avoid having it attached
    234     # while the operation failed.

C:\Users\natalieb\AppData\Local\Continuum\Anaconda3\lib\site-packages\obspy\core\trace.py in remove_response(self, inventory, output, water_level, pre_filt, zero_mean, taper, taper_fraction, plot, fig, **kwargs)
   2703         freq_response, freqs = \
   2704             response.get_evalresp_response(self.stats.delta, nfft,
-> 2705                                            output=output, **kwargs)
   2706 
   2707         if plot:

C:\Users\natalieb\AppData\Local\Continuum\Anaconda3\lib\site-packages\obspy\core\inventory\response.py in get_evalresp_response(self, t_samp, nfft, output, start_stage, end_stage)
    768             msg = ("Can not use evalresp on response with no response "
    769                    "stages.")
--> 770             raise ObsPyException(msg)
    771 
    772         import obspy.signal.evrespwrapper as ew

ObsPyException: Can not use evalresp on response with no response stages.

This sort of makes sense, in that when I do the same request using the URL
http://beta-service.geonet.org.nz/fdsnws/station/1/query?station=TDHS&channel=HNZ&level=response
it has no response stage information.

  • Can someone please check that the input StationXML for beta is actually missing this information? If it is provided in the input StationXML then FDSN is doing something odd that removes it.

I then did the same thing with a different station, KIKS. It returns the response information for the URL query (below) but obspy still raises the same error.
http://beta-service.geonet.org.nz/fdsnws/station/1/query?station=KIKS&channel=HNZ&level=response

My impression is that there are a couple of problems going on here.

Time handling for station service - tests needed

We're making progress - the time parsing from swarm now works but I think the result set might not be correct.

If I query

http://beta-service.geonet.org.nz/fdsnws/station/1/query?&format=text&level=channel&network=NZ&starttime=2017-06-01T04:08:18.636

I get no stations - I expect to get all the stations that are open "now".

If I query

http://beta-service.geonet.org.nz/fdsnws/station/1/query?&format=text&level=channel&network=NZ&starttime=2015-06-01T04:08:18.636

I get a small set of stations. Is the query currently returning stations that change from closed to open instead of stations that are open at the time (but maybe opened earlier)?
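The expected semantics described above ("open at the query time, even if it opened earlier") is an overlap predicate on the station's operating window, not a test for stations that change state after starttime. A sketch with illustrative names:

```python
def open_at(station_start, station_end, t):
    """True if the station is open at time t.

    A station is open at t if it opened at or before t and has not yet
    closed; station_end of None means it is still operating. Times can be
    any mutually comparable values (e.g. datetimes or epoch seconds).
    """
    return station_start <= t and (station_end is None or station_end > t)
```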

Update FDSN webpage

Update FDSN webpage and include the following information:

  • Supported parameters for each service
  • Bug in SC3 that causes issues when using FDSN (geonet) as a datasource
  • Variation from specifications for dataselect, includes starttime (or before) and endtime (or after)

Deploys

Need to redeploy (with config, role, and name changes):

  • fdsn-holdings-consumer
  • fdsn-quake-consumer
  • fdsn-ws

Need to clean up ecr, logs, and roles:

  • fdsn-s3-consumer

shorthand for queries not functional in beta

Hi -- I realise this is trivial, but none of the abbreviations (e.g. "minlat" or "start" instead of "minlatitude" or "starttime") are working in beta. The mildest of irritations.

Tool to generate bucket upload notifications

Hi Howard,

Please could you write a command line tool in this repo? I suggest the name cmd/s3-notify. We need to reindex the miniSEED data. The goal of s3-notify is to list all the keys matching a prefix in a bucket and then, for each object, send a notification to an SQS queue.

Only a partial notification message should be needed, in the JSON format used by the consumer, that is here https://github.com/GeoNet/fdsn/blob/master/internal/platform/s3/notification.go

In case it's useful the full message format is here http://docs.aws.amazon.com/AmazonS3/latest/dev/notification-content-structure.html

And there is an example at the end of this issue.

There will be hundreds of thousands of objects in the S3 bucket, so you will need to handle pagination / continuation when listing the bucket. If you want to test this out, I think you should be able to list s3:///seiscompml07 with any creds - it has a lot of objects in it.

Please use command line args (flags) for

  • bucket name
  • key prefix
  • SQS URL to send the notifications to

Use the regular env var for AWS creds:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_REGION=ap-southeast-2

Here is a complete notification message for reference (no leading slash in the key).

{  
   "Records":[  
      {  
         "eventVersion":"2.0",
         "eventSource":"aws:s3",
         "awsRegion":"ap-southeast-2",
         "eventTime":"2017-07-18T07:37:23.708Z",
         "eventName":"ObjectCreated:Put",
         "userIdentity":{  
            "principalId":"AWS:AIDAJYYKKSFK62GQJLB4Y"
         },
         "requestParameters":{  
            "sourceIPAddress":"161.65.58.28"
         },
         "responseElements":{  
            "x-amz-request-id":"6CA9E1E7E76A44E9",
            "x-amz-id-2":"OVmhicjfIVCx0mABG5bsTRpOSS3yJSpVVJGm6WNy4QT11sI8mxw1VWBuDN/7mURL/IoW6HLdHKs="
         },
         "s3":{  
            "s3SchemaVersion":"1.0",
            "configurationId":"tf-s3-topic-00e30fd9205671484af8e50bf3",
            "bucket":{  
               "name":"geonet-archive",
               "ownerIdentity":{  
                  "principalId":"A3698EZ2HM37K7"
               },
               "arn":"arn:aws:s3:::geonet-archive"
            },
            "object":{  
               "key":"miniseed/2007/2007.074/URZ.NZ/2007.074.URZ.01-UFC.NZ.D",
               "size":1024,
               "eTag":"30f6ce7e84fd511f1a1fd1d42ff398a9",
               "versionId":"Oru8b1DWpfIb68rCLVZk4bkIj04j2O2h",
               "sequencer":"00596DBAB3A2795B0C"
            }
         }
      }
   ]
}
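A sketch of building the partial notification message described above. The exact fields the consumer requires live in notification.go; this guesses at a minimal shape containing just the bucket name and key, so treat the field set as an assumption:

```python
import json

def minimal_notification(bucket, key):
    """Build a partial S3 put-notification: just enough of the Records
    structure to identify the bucket and key (no leading slash on the key)."""
    return json.dumps({
        "Records": [{
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {"name": bucket},
                "object": {"key": key.lstrip("/")},
            },
        }]
    })

msg = minimal_notification(
    "geonet-archive",
    "miniseed/2007/2007.074/URZ.NZ/2007.074.URZ.01-UFC.NZ.D")
```

s3-notify would send one such message per listed object to the SQS queue given on the command line.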

Handle deletes from S3 for holdings?

If a miniSEED file is added to S3 we will use a bucket put notification to trigger indexing the miniSEED into the holdings.

Currently a delete from S3 will not delete the holdings entry. This can be done manually with a DELETE to /holdings/...key...

Does this need automating? Will there be deletes from S3?

get_station bombs out on response request

Using beta-service,

inventory = client.get_stations(station="ALRZ", level="channel") works fine,
but
inventory = client.get_stations(station="ALRZ", level="response") completely bombs out with problems with paz

Stream naming for errors

Need to add the stream definition to errored holdings in fdsn-holdings-consumer.

They all get the same zero name at the moment, which makes reporting harder.

SeisComp having issues with beta dataselect service response

The SeisComp error seems to relate to how the web server responds to the dataselect service, in a way SeisComp does not like.

From the response headers to the service:

beta-service

byron@byron-Latitude-E7250:~/src/github.com/GeoNet/fdsn/cmd/fdsn-ws$ curl -I "http://beta-service.geonet.org.nz/fdsnws/dataselect/1/query?station=TDHS&starttime=2016-09-01T16:40:00.000&endtime=2016-09-01T16:42:00.000"
HTTP/1.1 405 Method Not Allowed
Content-Length: 18
Content-Type: text/plain; charset=utf-8
Date: Wed, 28 Jun 2017 01:39:18 GMT
Server: nginx/1.10.2
Surrogate-Control: max-age=86400
Connection: keep-alive

service

byron@byron-Latitude-E7250:~/src/github.com/GeoNet/fdsn/cmd/fdsn-ws$ curl -I "http://service.geonet.org.nz/fdsnws/dataselect/1/query?station=TDHS&starttime=2016-09-01T16:40:00.000&endtime=2016-09-01T16:42:00.000"
HTTP/1.1 200 OK
Date: Wed, 28 Jun 2017 01:41:59 GMT
Content-Type: application/vnd.fdsn.mseed
Content-Disposition: attachment; filename=fdsnws.mseed
Server: SeisComP3-FDSNWS/1.1.0

This is the error from SeisComp:

11:44:02 [debug/FDSNWSConnection] (/home/sysop/gitlocal/bmp/jakarta-2017.124-release/seiscomp3/src/trunk/libs/seiscomp3/io/recordstream/fdsnws.cpp:233) [01] Content-Type: application/vnd.fdsn.mseed
11:44:02 [debug/FDSNWSConnection] (/home/sysop/gitlocal/bmp/jakarta-2017.124-release/seiscomp3/src/trunk/libs/seiscomp3/io/recordstream/fdsnws.cpp:233) [02] Date: Tue, 27 Jun 2017 23:44:02 GMT
11:44:02 [debug/FDSNWSConnection] (/home/sysop/gitlocal/bmp/jakarta-2017.124-release/seiscomp3/src/trunk/libs/seiscomp3/io/recordstream/fdsnws.cpp:233) [03] Server: nginx/1.10.2
11:44:02 [debug/FDSNWSConnection] (/home/sysop/gitlocal/bmp/jakarta-2017.124-release/seiscomp3/src/trunk/libs/seiscomp3/io/recordstream/fdsnws.cpp:233) [04] transfer-encoding: chunked
11:44:02 [debug/FDSNWSConnection] (/home/sysop/gitlocal/bmp/jakarta-2017.124-release/seiscomp3/src/trunk/libs/seiscomp3/io/recordstream/fdsnws.cpp:233) [05] Connection: keep-alive
11:44:02 [debug/FDSNWSConnection] (/home/sysop/gitlocal/bmp/jakarta-2017.124-release/seiscomp3/src/trunk/libs/seiscomp3/io/recordstream/fdsnws.cpp:274) Content length is 0, nothing to read
11:44:02 [debug/RecordInput] (/home/sysop/gitlocal/bmp/jakarta-2017.124-release/seiscomp3/src/trunk/libs/seiscomp3/io/recordinput.cpp:211) RecordStream's end reached

Validate output of station FDSN

As a developer
I want to check that the webservice is returning the correct output
So I know I am on the right track

Acceptance criteria:

  • Provide examples of FDSN station queries and expected output

Testing strategy

Need to look at the testing strategy so that we can run the tests on PR.

update mSEED lib

Update the mSEED lib and remove string trimming from this code.

Keep 7+ days of data for nrt

The archive is 7 days behind. Keep more data in the nrt db to overlap this.
Need to check the DB disk size.
May want more RAM on eb instances?

Test the FDSN event service

As the product owner
I would like to have a select set of users test the complete FDSN event service before it goes to service.geonet.org.nz
So that I can be confident that it will not greatly impact the delivery of existing FDSN services

Acceptance Criteria:

  • there is a beta-service.geonet.org.nz set up
  • it provides access to the new event service

Removed the acceptance criterion "it provides access to the existing station and dataselect services", since new services were provided.

remove format option from event service wadl

The WADL still has the format option available. Since the only output format we support is XML, it makes sense to remove that option from the WADL for the event service and return an appropriate error if someone tries to use it.

Please:

  • remove format option from the event service wadl
  • return an error if someone uses the format option

Add more optional parameters to the event service

This is probably not MVP but I can imagine some users will want this feature added:

As a seismologist, I want to search for all earthquakes within a defined distance range from a station, so I know what earthquakes to request data for.

  • Add area circle search using optional parameters: latitude, longitude, minradius, maxradius to fdsn event service

Definition of gap needed

Counting gaps in a miniSEED file should be pretty straightforward. However, I need a definition of what to count, i.e., what is a gap?

Can discuss further.
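One possible definition, offered as a starting point for that discussion rather than a settled answer: a gap is any hole between consecutive records longer than some tolerance (e.g. half a sample period). A sketch:

```python
def count_gaps(spans, tolerance):
    """Count gaps in time-sorted record spans.

    spans: list of (start, end) times for each record, sorted by start.
    tolerance: maximum allowed hole (seconds) before consecutive records
    are considered discontinuous -- the definition under discussion.
    """
    gaps = 0
    for (_, prev_end), (next_start, _) in zip(spans, spans[1:]):
        if next_start - prev_end > tolerance:
            gaps += 1
    return gaps
```

Overlaps (next_start < prev_end) are a separate question this definition deliberately ignores.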

FDSN beta event service not wrapping around 180 longitude

Here is an issue raised in GeoNet/help. See GeoNet/help#40
Hi all, I just started playing with the beta fdsn service and ran into unexpected return values when querying entries around 180 longitude.
The url:
http://beta-service.geonet.org.nz/fdsnws/event/1/query?minlatitude=-49.0&minlongitude=175.0&endtime=2016-09-05T00%3A00%3A00.000000&maxlatitude=-35.0&starttime=2016-09-04T00%3A00%3A00.000000&minmagnitude=4.0&maxlongitude=180.0
returns a nice xml file, and changing maxlongitude to larger values raises an error (as expected), but, changing to this url:
http://beta-service.geonet.org.nz/fdsnws/event/1/query?minlatitude=-49.0&minlongitude=175.0&endtime=2016-09-05T00%3A00%3A00.000000&maxlatitude=-35.0&starttime=2016-09-04T00%3A00%3A00.000000&minmagnitude=4.0&maxlongitude=-175.0
returns no data. Other services (e.g. IRIS) wrap the selection rectangle around the globe here, e.g. the following url returns a valid xml:
http://service.iris.edu/fdsnws/event/1/query?minlatitude=-49.0&minlongitude=175.0&endtime=2016-09-05T00%3A00%3A00.000000&maxlatitude=-35.0&starttime=2016-09-04T00%3A00%3A00.000000&minmagnitude=4.0&maxlongitude=-175.0

This is likely an edge case, but probably worth a fix so it works like other FDSN web services @nbalfour ?
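The wrap-around behaviour other services implement can be expressed as a predicate: when minlongitude > maxlongitude the range crosses the antimeridian and becomes a union of two intervals. A sketch:

```python
def lon_in_range(lon, minlon, maxlon):
    """Longitude range test that wraps across the antimeridian.

    For a normal range (minlon <= maxlon) this is a plain interval test.
    When minlon > maxlon (e.g. 175 .. -175) the rectangle crosses 180
    and the test becomes a union of [minlon, 180] and [-180, maxlon].
    """
    if minlon <= maxlon:
        return minlon <= lon <= maxlon
    return lon >= minlon or lon <= maxlon
```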

Add service status endpoints for http probes

Add service level monitoring for overall service functionality. These should be suitable for use with http probes (return a 200 or 500):

  • for fdsn (archive) make sure the number of samples in the archive in the last 8 days is > 0. This allows for the 7 day lag on putting files in the archive.
  • for fdsn-nrt make sure there are miniSEED records in the db for the last hour.
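The two checks above map directly onto probe status codes; a sketch of the decision logic only (the actual DB queries and endpoint wiring are left out, and the function name is illustrative):

```python
def probe_status(samples_last_8_days=None, nrt_records_last_hour=None):
    """Map a health-check count onto an HTTP probe status.

    Pass whichever count applies to the service being probed:
    the archive check uses samples_last_8_days, the NRT check uses
    nrt_records_last_hour. Returns 200 (healthy) or 500 (unhealthy).
    """
    if samples_last_8_days is not None:
        return 200 if samples_last_8_days > 0 else 500
    if nrt_records_last_hour is not None:
        return 200 if nrt_records_last_hour > 0 else 500
    return 500  # no check supplied: fail safe
```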

optional format parameter for station service

Hi Howard,

obspy (a common FDSN client) always adds &format=xml to the station query URL e.g.,

 curl "http://beta-service.geonet.org.nz/fdsnws/station/1/query?location=20&station=A*&format=xml"

The format parameter is optional to implement per the spec, and the default format is xml (so your implementation is correct). Please could you allow format=xml as an optional parameter and return a 401 for format=text? This will let obspy work (then Nat can do more testing and a demo).

After that is fixed Nat might want the text output implementing as well?

Thanks,
Geoff

Incorrect "other" event type in QuakeML

The correct QuakeML event type for "other" is "other event"; the XSLT is inserting "other".

<xs:enumeration value="other event"/>
  • fix the XSLT here
  • bug report the upstream XSLT source
  • deploy haz-quake-consumer and
  • update the errored QuakeML in the DB.
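The fix amounts to mapping the offending type onto the valid enumeration wherever event types are emitted. A sketch of that mapping (the real fix belongs in the XSLT; this mapping table is illustrative):

```python
# Map SC3ML event types that are not valid QuakeML enumerations onto
# valid ones. "other" -> "other event" is the bug described above;
# any other type is passed through unchanged.
EVENT_TYPE_MAP = {
    "other": "other event",
}

def quakeml_event_type(sc3ml_type):
    return EVENT_TYPE_MAP.get(sc3ml_type, sc3ml_type)
```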

Improve fdsn-ws-nrt deployment

  • sla and tier
  • monitoring
  • runbook
  • SEEDLink server
  • check beanstalk app config (correct subnet usage)
  • autoscaling config
  • logging and performance

Code complexity after use of tmp file removed

The dataselect code had to do a lot of work to be performant. Now that we use a holdings db and stream the response to the client, there is code that can be simplified.

Needs performance testing

quake consumer

Move the quake consumer HTTP endpoints from fdsn-ws into their own consumer application.
We want fdsn-ws to be read-only for scalability and security.

Add more optional parameters to station service

This is probably not MVP but I can imagine some users will want this feature added:

As a seismologist, I want to search for all stations within a defined distance range from an earthquake, so I know what stations to request data from.

  • Add area circle search using optional parameters: latitude, longitude, minradius, maxradius to fdsn station service

holdings consumer

Move the holdings consumer HTTP endpoints from fdsn-ws into their own consumer application.
We want fdsn-ws to be read-only for scalability and security.

Test reliability of FDSN dataselect service

As a tester
I want to run appropriate tests of the dataselect service
So that I can make sure the service is fulfilling the requirements of our end users

Acceptance criteria:

  • collate problematic fdsn queries to use as tests
  • run queries on the new fsdn data select service
  • queries return a meaningful error instead of hanging if they fail

includeavailability parameter for station service.

Swarm sends the (optional) includeavailability parameter to the station service. This causes it to error (we don't support it).

It's another one where false is the default anyway (so why send the parameter?).

/fdsnws/station/1/query?net=*&level=network&format=xml&includeavailability=false

We need to strip or ignore these optional parameters that we don't support.
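Stripping could be done by filtering the query string against the set of parameters each endpoint actually implements. A sketch, where the SUPPORTED set is an illustrative subset (the real supported set lives in the fdsn-ws handlers):

```python
from urllib.parse import parse_qsl, urlencode

# Illustrative subset of parameters the station endpoint implements.
SUPPORTED = {"net", "network", "sta", "station", "level", "format",
             "starttime", "endtime"}

def strip_unsupported(query):
    """Drop optional parameters we don't implement instead of erroring."""
    kept = [(k, v) for k, v in parse_qsl(query) if k in SUPPORTED]
    return urlencode(kept)

cleaned = strip_unsupported(
    "net=*&level=network&format=xml&includeavailability=false")
```

With this approach the Swarm query above would lose only includeavailability=false, which is the default behaviour anyway.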
