hvsrpy's Issues

Clarity criteria don't respect the frequency filter range

Hi,

It seems that the clarity criteria are not drawing their statistics from between f_range_low and f_range_high.

For example:
Criteria iii): Pass
A0[f0mc]=3.017 is > 2.0

I got this result when using 3 Hz to 5 Hz. You can see in the image below that this amplitude is almost certainly coming from the large peak that I'm trying to avoid.

[image: HVSR curve showing the large peak outside the 3 Hz to 5 Hz range]

I haven't checked the other criteria, so it may be that only the amplitude criterion is an issue.
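Not a fix, but to illustrate the expected behavior: restricting the clarity statistics to the requested window is essentially a masking operation. The arrays below are purely synthetic stand-ins, not hvsrpy internals:

```python
import numpy as np

# Synthetic stand-ins for the frequency vector and mean HVSR curve.
frequency = np.linspace(0.1, 10.0, 100)
amplitude = np.ones_like(frequency)
amplitude[frequency > 5.0] = 4.0  # large peak outside the window of interest

# Restrict the peak-amplitude statistic to [f_range_low, f_range_high].
f_low, f_high = 3.0, 5.0
mask = (frequency >= f_low) & (frequency <= f_high)
a0_windowed = amplitude[mask].max()  # what criterion iii should arguably use
a0_global = amplitude.max()          # what the reported result appears to use
```

With the mask applied, the large peak above 5 Hz no longer contaminates the statistic.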

Issue with installing hvsrpy

Hey Joe,

I am having some issues installing hvsrpy in my JupyterLab. I run it via Anaconda Navigator to manage my environments, and I am trying to install it in my obspy environment. The traceback/installation error I get when I execute "!pip install hvsrpy" is attached.

issue installing hvsrpy.docx

Problem reading SAF v1 format file

Hi,
First of all, thank you for sharing your work with the community.
I am writing this post because I am having problems reading a SAF ASCII format file (v1) in hvsrpy.

I have uploaded here an example file (obtained by exporting the trace, which was recorded with a Tromino (Moho) instrument, from the Grilla software).
EqualizedFile-HV01.zip

In the following I have pasted the error I get.
As far as I can understand, the file type is not being recognized correctly; it appears to be identified as type 'peer'.

Thank you again
Giuseppe

.... HV_001part1/test5_001.saf does not include the NORTH_ROT keyword, assuming equal to zero.
warnings.warn(msg, UserWarning)

ValueError Traceback (most recent call last)
Cell In[9], line 1
----> 1 srecords = hvsrpy.read(fnames)
2 srecords = hvsrpy.preprocess(srecords, preprocessing_settings)
3 hvsr = hvsrpy.process(srecords, processing_settings)

File ~\anaconda3\envs\HVSR\Lib\site-packages\hvsrpy\data_wrangler.py:701, in read(fnames, obspy_read_kwargs, degrees_from_north)
698 if len(fname) == 1:
699 fname = fname[0]
--> 701 seismic_recordings.append(read_single(fname,
702 obspy_read_kwargs=read_kwargs,
703 degrees_from_north=degrees_from_north))
705 return seismic_recordings

File ~\anaconda3\envs\HVSR\Lib\site-packages\hvsrpy\data_wrangler.py:619, in read_single(fnames, obspy_read_kwargs, degrees_from_north)
616 logger.info(f"Tried reading as {ftype}, got exception | {e}")
618 if ftype == "peer":
--> 619 raise e
621 pass
622 else:

File ~\anaconda3\envs\HVSR\Lib\site-packages\hvsrpy\data_wrangler.py:612, in read_single(fnames, obspy_read_kwargs, degrees_from_north)
610 for ftype, read_function in READ_FUNCTION_DICT.items():
611 try:
--> 612 srecording_3c = read_function(fnames,
613 obspy_read_kwargs=obspy_read_kwargs,
614 degrees_from_north=degrees_from_north)
615 except Exception as e:
616 logger.info(f"Tried reading as {ftype}, got exception | {e}")

File ~\anaconda3\envs\HVSR\Lib\site-packages\hvsrpy\data_wrangler.py:468, in _read_peer(fnames, obspy_read_kwargs, degrees_from_north)
466 msg = "Must provide 3 peer files (one per trace) as list or tuple, "
467 msg += f"not {type(fnames)}."
--> 468 raise ValueError(msg)
470 component_list = []
471 component_keys = []

ValueError: Must provide 3 peer files (one per trace) as list or tuple, not <class 'str'>.

(Continuous error): ValueError: illegal value in 4-th argument of internal None

To anyone who reads this,

I recorded ambient noise at a test site, and I am using hvsrpy to estimate f0 and a0. I had 4 sensors per site, and the first two worked perfectly. However, when I try to load the miniseed of the third station, I always get the error "ValueError: illegal value in 4-th argument of internal None". I am attaching both the miniseed that worked and the miniseed that raises the error. This work is part of my PhD thesis, and I would gladly accept any assistance! The waveforms are here for 30 days: https://easyupload.io/m/r8n6ms

Thank you again,
Angelos Zymvragakis, MSc Geologist - Seismologist

Pre-processing?

Hi Joseph,

Thanks for putting together this amazing package!

I'm doing some initial exploration with a raw data set collected from East Antarctica and encountered some 'interesting' spectra:
[image: hvsrpy_Casey_station70_006_HVSR1]
Being a relative beginner with HVSR processing, I wanted to ask what your recommended data pre-processing steps are before HVSR analysis in hvsrpy, beyond the implemented time-domain filtering and spectral smoothing. I understand instrument-response correction may not be necessary if responses are similar across components, but can, for example, detrending or static-shift corrections be applied in hvsrpy? What are the typical steps you take? I'd be grateful for any advice you could give!

Cheers,

Ian
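On the detrending part of the question above: obspy's Stream.detrend("linear") (and "demean") can be applied to a recording before handing it to hvsrpy, and the operation itself amounts to subtracting a least-squares line. A minimal numpy sketch, with a synthetic drifting signal standing in for field data:

```python
import numpy as np

def detrend_linear(x):
    """Remove the best-fit (least-squares) line from a signal."""
    n = np.arange(len(x))
    slope, intercept = np.polyfit(n, x, 1)
    return x - (slope * n + intercept)

# A pure linear drift: after detrending, essentially nothing should remain.
signal = 0.5 * np.arange(100) + 3.0
residual = detrend_linear(signal)
```

obspy additionally offers tapering (Stream.taper) to limit spectral leakage at window edges; whether any of this helps the spectra shown above depends on the data.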

Combine Three One-Component Miniseed Files to a Single Three-Component File

Problem Summary

Some seismic testing instruments save their measurements into three one-component miniseed files (one each for north, east, and vertical). hvsrpy and the associated web-application hvsrweb assume a three-component miniseed file (i.e., all three traces are saved in the same miniseed file) by default. While Sensor3c's .from_mseed classmethod allows for the provision of three one-component files in lieu of a single three-component file (see docs for details), this feature has not yet been added to hvsrweb and remains a frequent point of inquiry.

Proposed Solution

Fortunately, combining three one-component miniseed files into a single three-component miniseed file is quite straightforward using obspy. I have posted code demonstrating how to do this as a gist here. I hope that if you are reading this you find it helpful.

Feature Request: Command line interface

It would be nice to be able to run the program from the command line for a list of files. This is the way that I have implemented my own HVSR calculation software. The following is an example of my command-line interface:

usage: H/V Calculator [-h] [--version] [-a] [-c CONFIG] [-m {both,welch,sesame,raw_only}]
                      [-p {auto,manual}]
                      [inputs [inputs ...]]

Calculator for V/H ratios from mini-seed

positional arguments:
  inputs                Input file(s) to read

optional arguments:
  -h, --help            show this help message and exit
  --version             show program's version number and exit
  -a, --all             Compute all usable
  -c CONFIG, --config CONFIG
                        Specify configuration
  -m {both,welch,sesame,raw_only}, --method {both,welch,sesame,raw_only}
                        Method for computing H/V ratio
  -p {auto,manual}, --peakselect {auto,manual}
                        Method for selecting the peak

A few things:

  • I would also add an argument for the output directory, and then a sub-directory would be based on the input file name.
  • In each output directory, standard plots and output files are created.
  • A configuration file can be specified so that an analysis is repeated. The configuration is also saved to the output to document that calculation.

I think such an interface would help in bulk processing of recordings. Additionally, interactively selecting frequency ranges could be done using matplotlib GUI support.
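For what it's worth, the interface described above maps almost one-to-one onto Python's standard argparse module. A sketch follows; the flag names mirror the usage text above and are not part of any actual hvsrpy API:

```python
import argparse

# Hypothetical CLI mirroring the usage text above; none of these flags
# exist in hvsrpy today.
parser = argparse.ArgumentParser(
    prog="H/V Calculator",
    description="Calculator for H/V ratios from mini-seed")
parser.add_argument("inputs", nargs="*", help="Input file(s) to read")
parser.add_argument("--version", action="version", version="%(prog)s 0.1")
parser.add_argument("-a", "--all", action="store_true", help="Compute all usable")
parser.add_argument("-c", "--config", help="Specify configuration")
parser.add_argument("-m", "--method", default="both",
                    choices=["both", "welch", "sesame", "raw_only"],
                    help="Method for computing H/V ratio")
parser.add_argument("-p", "--peakselect", default="auto",
                    choices=["auto", "manual"],
                    help="Method for selecting the peak")

# Example invocation.
args = parser.parse_args(["-m", "welch", "rec1.mseed", "rec2.mseed"])
```

argparse generates the -h/--help text shown in the usage block automatically, so most of the work is in the processing pipeline behind it, not the interface itself.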

Problems reading .miniseed files

Hello!

My name is David. First of all, thank you very much for this code, it looks really great and very useful!

Well, let's get to the issue. I'm having some problems when I use my own .miniseed files with your code. For example, I'm struggling with the notebook named "Gallery of mHVSR Examples Automatically Checked with SESAME (2004) Reliability and Clarity Criteria". When I use my .miniseed file (or even yours), I get the same error message:

ValueError: Must provide 3 peer files (one per trace) as list or tuple, not <class 'str'>

I thought that maybe the problem is because I merged the three components (E, N, Z) into one .miniseed file, but if I use it in Geopsy there's no problem: I can see each component and obtain the H/V curve. Even if I try to use the three .miniseed components separately I have problems.

I have to say that I'm still a beginner playing with python and I believe that I'm missing something.

Could you help me? Thank you in advance! ;)

API Change Request: Multi-Azimuth Output File

API Change Request: Multi-Azimuth Output File

On behalf of Alan Thorp, Ground Investigation Ltd.

Problem

The output file created when considering multiple azimuths (i.e., via a HvsrRotated object) is comma delimited. However, many of the metadata parameters (e.g., LMf0,AZ and SigmaLNf0,AZ) contain commas, making it difficult to parse the files programmatically.

Proposed Solution

Remove the offending commas such that LMf0,AZ and SigmaLNf0,AZ become LMf0AZ and SigmaLNf0AZ, respectively.

Output final results

In the previous version it was possible to save the final results with
hv.to_file(file_name_out, distribution_f0, distribution_mc, data_format="hvsrpy").
Is there a way to do the same in this new version?
Thank you

Supporting alternate data formats (i.e., not only miniSEED)

Problem Summary

Researchers use various equipment to measure ambient noise for HVSR, and this variety of equipment unfortunately results in a variety of data formats. Ideally, hvsrpy would allow convenient handling of a variety of common data formats, not only miniSEED.

Proposed Solution

As most researchers are able to convert their data to ASCII/UTF-8 characters, it makes sense to extend hvsrpy to include that functionality as a first step. However, as the format of any text file may vary, it's difficult to produce a single script that will extract the data appropriately. Therefore, keep in mind that the examples provided below are only examples of a potential solution and can/must be modified appropriately.

Examples

For MiniShark:

import pandas as pd
import sigpropy
import hvsrpy

# Load metadata: extract the sample rate from the commented header.
with open(fname, "r") as f:
    lines = f.readlines()
for line in lines:
    if line.startswith("#Sample rate (sps):"):
        _, sample_rate = line.split(":\t")
sample_rate = float(sample_rate)
dt = 1/sample_rate

# Load data: three tab-separated columns (vertical, east, north).
keys = ["vt", "ew", "ns"]
df = pd.read_csv(fname, comment="#", sep="\t", names=keys)
components = {key: sigpropy.TimeSeries(df[key], dt) for key in keys}

# Create Sensor3c object to replace hvsrpy.Sensor3c.from_mseed()
sensor = hvsrpy.Sensor3c(**components, meta={"File Name": fname})

For SESAME ASCII data format (SAF) v1

import sigpropy
import hvsrpy

fname = "MT_20211122_133110.SAF"

with open(fname, "r") as f:
    lines = f.readlines()
    
for idx, line in enumerate(lines):
    if line.startswith("SAMP_FREQ = "):
        fs = float(line[len("SAMP_FREQ = "):])
    if line.startswith("####--------"):
        idx += 1
        break

# Parse the three whitespace-separated data columns, converting each
# value from string to float.
vt, ns, ew = [], [], []
for line in lines[idx:]:
    _vt, _ns, _ew = line.split()
    vt.append(float(_vt))
    ns.append(float(_ns))
    ew.append(float(_ew))

vt = sigpropy.TimeSeries(vt, dt=1/fs)
ns = sigpropy.TimeSeries(ns, dt=1/fs)
ew = sigpropy.TimeSeries(ew, dt=1/fs)
    
sensor = hvsrpy.Sensor3c(ns, ew, vt)

hvsrpy Community Survey

Dear hvsrpy Community,
It is hard to believe, but hvsrpy (Vantassel, 2020) has just turned three. Over that time hvsrpy has seen widespread use in seismology, geophysics, and engineering, and its user base has grown rapidly (over 30k downloads and over 45 stars on GitHub). As such, I have decided to undertake the first major overhaul of the hvsrpy codebase to add new features and streamline its API. Throughout this process I am looking for feedback (via this survey: https://forms.gle/36aWUKrgGwiYddnSA) to understand how the community currently processes HVSR data and what new features it would like to see in the future. Your responses are greatly appreciated. Keep an eye out for the release of hvsrpy v2.0.0 later this year.

CPU number for processing

Hi,

First of all, thanks for your efforts for HVSRpy.

I'm currently exploring the use of multiprocessing with HVSRpy.
However, I've noticed that the processing utilizes all available CPU resources extensively.
Is there an option to limit the number of CPUs for a single process, similar to the -j option in geopsy?

Thank you,
segu
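Not the author, but one library-agnostic workaround (assuming the CPU load comes from numpy/scipy's BLAS and OpenMP thread pools rather than from hvsrpy's own multiprocessing, which is an assumption here) is to cap those pools via environment variables before the first numpy import:

```python
import os

# Cap BLAS/OpenMP thread pools; this must happen *before* numpy is first
# imported, since the pools are sized at import time.
os.environ["OMP_NUM_THREADS"] = "2"
os.environ["OPENBLAS_NUM_THREADS"] = "2"
os.environ["MKL_NUM_THREADS"] = "2"

import numpy as np  # thread pools created after the caps respect them
```

The same variables can be set in the shell before launching Python, which avoids import-order pitfalls entirely.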
