
pyhum's Introduction


May 2020

This is now legacy code, written in Python 2.7, and not actively worked on or supported. If you improve the code, please submit a pull request and I will incorporate it.

I will no longer respond to issues or emails concerning this project, but if you find it useful, I am happy about that.


PyHum


A Python framework for reading and processing data from a Humminbird low-cost sidescan sonar

Project website here for more details

PyHum is an open-source project dedicated to providing a generic Python framework for reading and exporting data from Humminbird(R) instruments, carrying out rudimentary radiometric corrections to the data, classifying bed texture, and producing maps as overlays on aerial photos and kml files for Google Earth. Specifically, the software can:

  1. read Humminbird DAT and associated SON files
  2. export data
  3. carry out rudimentary radiometric corrections to data, and
  4. classify bed texture using the algorithm detailed in Buscombe, Grams, Smith, (2015) "Automated riverbed sediment classification using low-cost sidescan sonar", Journal of Hydraulic Engineering, 10.1061/(ASCE)HY.1943-7900.0001079, 06015019.
  5. produce maps as GeoTIFF and kml files

The software is designed to read Humminbird data (.SON, .IDX, and .DAT files) and works on both sidescan and downward-looking echosounder data, where available.

Please cite!

If you use PyHum in your published work, please cite the following papers:

  1. Buscombe, D., Grams, P.E., and Smith, S. (2015) "Automated riverbed sediment classification using low-cost sidescan sonar", Journal of Hydraulic Engineering, 10.1061/(ASCE)HY.1943-7900.0001079, 06015019.

  2. Buscombe, D., 2017, Shallow water benthic imaging and substrate characterization using recreational-grade sidescan-sonar. ENVIRONMENTAL MODELLING & SOFTWARE 89, 1-18.

Contributing & Credits

Primary Developer: Daniel Buscombe, Northern Arizona University, Flagstaff, AZ 86011, [email protected]

Co-Developer: Daniel Hamill, Department of Watershed Sciences, Utah State University, Logan, UT 84322, [email protected]

Version: 1.4.5 | Revision: Jan, 2018

Thanks to the following individuals for debugging, supplying data and thoughtful suggestions:

Adam Kaeser
Cam Bodine
Amberle Jones
Phil Whittle
Jacob Berninger
Doug Newcomb
Jereme Gaeta
Ian Nesbit
Allen Aven
Liam Zarri
Daphne Tuzlak
Rick Debbout
Vikram Umnithan
Matt Marineau
Kathryn Ford

Please Read

  1. This software has been tested with Python 2.7 on Linux Fedora 16 & 20, Ubuntu 12 -- 17, Windows 7. Python 3 is not yet supported

  2. This software has (so far) been used only with Humminbird 700, 800, 900, 1100, HELIX, MEGA and ONIX series instruments.

  3. PyHum is not yet tested with SOLIX, ICE, ION, PMAX systems. Please make example data available and we'll see what we can do!

Contents

The programs in this package are as follows:

  1. read script to read Humminbird DAT and associated SON files, export data, and produce some rudimentary plots

  2. correct script to read Humminbird data (output from 'read') and perform some radiometric corrections and produce some rudimentary plots

  3. rmshadows script to read the output of 'correct' and remove dark shadows in scans caused by shallows, shorelines, and attenuation of acoustics with distance

  4. texture script to read radiometrically corrected Humminbird data in MAT format (output from 'correct') and perform a textural analysis using the spectral method of Buscombe et al. (2015), producing some rudimentary plots

  5. map script to generate a point cloud (X,Y,sidescan intensity), save it to ascii format file, grid it and make a raster overlay on an aerial image (pulled automatically from the ESRI GIS image server), and a kml file for showing the same thing in google-earth

  6. map_texture script to generate a point cloud (X,Y,texture lengthscale - calculated using pyhum_texture), save it to ascii format file, grid it and make a raster overlay on an aerial image (pulled automatically from the ESRI GIS image server), and a kml file for showing the same thing in google-earth

  7. e1e2 script to analyse the first (e1, 'roughness') and second (e2, 'hardness') echo returns from the high-frequency downward-looking echosounder, and generate generalised acoustic parameters for the purposes of point classification of submerged substrates/vegetation. The processing accounts for the absorption of sound in water, and does a basic k-means clustering of e1 and e2 coefficients into a specified number of 'acoustic classes'. This code is based on code by Barb Fagetter ([email protected]). Georeferenced parameters are saved in csv form, and optionally plots and kml files are generated

  8. gui A graphical user interface which essentially serves as a 'wrapper' to the above functions, allowing graphical input of processing options and sequential analysis of the data using PyHum modules

These are all command-line programs which take a number of inputs (some required, some optional). Please see the individual files for a comprehensive list of input options
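The k-means step in the e1e2 module (item 7 above) groups (e1, e2) coefficient pairs into acoustic classes. The idea can be sketched with a minimal NumPy-only example — `kmeans_2d` below is a hypothetical illustration, not PyHum's implementation:

```python
import numpy as np

def kmeans_2d(points, k, n_iter=20):
    """Cluster N 2-D points (e.g. (e1, e2) coefficient pairs) into
    k 'acoustic classes' with a bare-bones k-means loop."""
    # deterministic init: pick k points spread through the array
    centroids = points[::max(1, len(points) // k)][:k].copy()
    for _ in range(n_iter):
        # distance of every point to every centroid, then assign
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its member points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# two synthetic clumps of (roughness, hardness) coefficients
rng = np.random.RandomState(42)
e1e2 = np.vstack([rng.normal(0.2, 0.02, (20, 2)),
                  rng.normal(0.8, 0.02, (20, 2))])
labels, cents = kmeans_2d(e1e2, k=2)
```

In e1e2 itself, the clustering is run on absorption-corrected coefficients and the resulting classes are saved alongside the georeferenced parameters in csv form.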

Setup

PyHum only works in python 2.X. Python 3 is not yet supported.

Installing in a conda virtual env (recommended)

In a conda (miniconda/anaconda) python 2 environment:

Linux:

conda create --name pyhum python=2
source activate pyhum
conda install gdal
conda install -c conda-forge basemap-data-hires -y
conda install scipy numpy scikit-image
pip install simplekml sklearn pandas dask
pip install joblib toolz cython
conda install -c conda-forge pyresample -y ### or if that fails: pip install pyresample
pip install git+https://github.com/dbuscombe-usgs/PyHum.git --no-deps
python -c"import PyHum;PyHum.dotest()" 
source deactivate pyhum 

Windows:

conda create --name pyhum python=2
activate pyhum
conda install gdal
conda install -c conda-forge basemap-data-hires -y
conda install scipy numpy scikit-image
pip install simplekml sklearn pandas dask
pip install joblib toolz cython
conda install -c conda-forge pyresample -y ### or if that fails: pip install pyresample==1.1.4
conda install numpy==1.12.0
pip install git+https://github.com/dbuscombe-usgs/PyHum.git --no-deps
pip install matplotlib --upgrade

Then run the test, and finally deactivate the venv ::

python -c"import PyHum;PyHum.dotest()" 
deactivate pyhum

If you get gdal/osgeo/ogr/os errors, install GDAL (Windows only)::

  1. Go to: https://www.lfd.uci.edu/~gohlke/pythonlibs/#gdal
  2. Download GDAL-2.2.3-cp27-cp27m-win_amd64.whl
  3. Install using pip::

pip install GDAL-2.2.3-cp27-cp27m-win_amd64.whl

Installing as a library accessible outside of virtual env

Prerequisite

pip install Cython

  1. From PyPI::

pip install PyHum

  2. The latest 'bleeding edge' (pre-release) version directly from github::

pip install git+https://github.com/dbuscombe-usgs/PyHum.git

(Windows users) install git from here: https://git-scm.com/download/win

  3. From a github repo clone::
git clone --depth 1 [email protected]:dbuscombe-usgs/PyHum.git
cd PyHum
python setup.py install

or a local installation:

python setup.py install --user
  4. Linux users, using a virtual environment::
virtualenv venv
source venv/bin/activate
pip install numpy
pip install Cython
pip install basemap --allow-external basemap --allow-unverified basemap
pip install PyHum --no-deps
python -c "import PyHum; PyHum.test.dotest()"
deactivate (or source venv/bin/deactivate)

The results will live in "venv/lib/python2.7/site-packages/PyHum". Note for Fedora linux users: you need the geos-devel package for basemap, and the blas and lapack libraries for scipy

Running the test

A test can be carried out by running the supplied script. From the command line (terminal)::

python -c"import PyHum;PyHum.dotest()" 

or (if python3 is your default python)::

python2 -c"import PyHum;PyHum.dotest()" 

Using the GUI

From the command line (terminal)::

python -c "import PyHum; PyHum.gui()"

Using PyHum

Inputs to the program are a .DAT file (e.g. R0089.DAT) and a folder of .SON and .IDX files (e.g. /my/folder/R0089). The program will read the .SON files with or without the accompanying .IDX files, but will be faster if the .IDX files are present.
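Before launching a run, that layout (a .DAT file plus a sibling folder of .SON and optional .IDX files) can be sanity-checked with a few lines of Python. `check_inputs` below is a hypothetical helper for illustration, not part of PyHum:

```python
import glob
import os

def check_inputs(humfile, sonpath):
    """Verify a Humminbird recording before processing: the .DAT file
    must exist and sonpath must contain .SON files. .IDX files are
    optional -- PyHum just reads faster when they are present."""
    if not os.path.isfile(humfile):
        raise IOError("DAT file not found: %s" % humfile)
    sons = sorted(glob.glob(os.path.join(sonpath, "*.SON")))
    if not sons:
        raise IOError("no .SON files found in: %s" % sonpath)
    idxs = sorted(glob.glob(os.path.join(sonpath, "*.IDX")))
    return sons, idxs
```

For example, `check_inputs('R0089.DAT', '/my/folder/R0089')` returns the sorted .SON and .IDX file lists, or raises an error describing what is missing.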

PyHum is modular so can be called from within a python or ipython console, from an IDE (such as IDLE or Spyder), or by running a script.

The following example script could be saved as, for example, "proc_mysidescandata.py" and run from the command line using::

   python proc_mysidescandata.py -i C:\MyData\R0087.DAT -s C:\MyData\R0087

import sys, getopt

from Tkinter import Tk
from tkFileDialog import askopenfilename, askdirectory

import PyHum
import os

if __name__ == '__main__': 

    argv = sys.argv[1:]
    humfile = ''; sonpath = ''
    
    # parse inputs to variables
    try:
       opts, args = getopt.getopt(argv,"hi:s:")
    except getopt.GetoptError:
         print 'error'
         sys.exit(2)
    for opt, arg in opts:
       if opt == '-h':
         print 'help'
         sys.exit()
       elif opt in ("-i"):
          humfile = arg
       elif opt in ("-s"):
          sonpath = arg

    # prompt user to supply file if no input file given
    if not humfile:
       print 'An input file is required!!!!!!'
       Tk().withdraw() # we don't want a full GUI, so keep the root window from appearing
       humfile = askopenfilename(filetypes=[("DAT files","*.DAT")]) 

    # prompt user to supply directory if no input sonpath is given
    if not sonpath:
       print 'A *.SON directory is required!!!!!!'
       Tk().withdraw() # we don't want a full GUI, so keep the root window from appearing
       sonpath = askdirectory() 

    # print given arguments to screen and convert data type where necessary
    if humfile:
       print 'Input file is %s' % (humfile)

    if sonpath:
       print 'Son files are in %s' % (sonpath)
                 
    doplot = 1 #yes

    # reading specific settings
    cs2cs_args = "epsg:26949" #arizona central state plane
    bedpick = 1 # auto bed pick
    c = 1450 # speed of sound fresh water
    t = 0.108 # length of transducer
    draft = 0.3 # draft in metres
    flip_lr = 1 # flip port and starboard
    model = 998 # humminbird model
    calc_bearing = 0 #no
    filt_bearing = 0 #no
    chunk = 'd100' # distance, 100m
    #chunk = 'p1000' # pings, 1000
    #chunk = 'h10' # heading deviation, 10 deg
     
    # correction specific settings
    maxW = 1000 # rms output wattage
    dofilt = 0 # 1 = apply a phase preserving filter (WARNING!! takes a very long time for large scans)
    correct_withwater = 0 # don't retain water column in radiometric correction (1 = retain water column for radiometric corrections)
    ph = 7.0 # acidity on the pH scale
    temp = 10.0 # water temperature in degrees Celsius
    salinity = 0.0
    dconcfile = None

    # for shadow removal
    shadowmask = 1 #manual shadow removal

    # for mapping
    res = 99 # grid resolution in metres
    # if res==99, the program will automatically calc res from the spatial res of the scans
    mode = 1 # gridding mode (simple nearest neighbour)
    #mode = 2 # gridding mode (inverse distance weighted nearest neighbour)
    #mode = 3 # gridding mode (gaussian weighted nearest neighbour)
    dowrite = 0 #disable writing of point cloud data to file
    scalemax = 60 # max color scale value (60 is a good place to start)

    ## read data in SON files into PyHum memory mapped format (.dat)
    PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk)

    ## correct scans and remove water column
    PyHum.correct(humfile, sonpath, maxW, doplot, dofilt, correct_withwater, ph, temp, salinity, dconcfile)

    ## remove acoustic shadows (caused by distal acoustic attenuation or sound hitting shallows or shoreline)
    win = 10 # window size in pixels
    PyHum.rmshadows(humfile, sonpath, win, shadowmask, doplot)

    ## Calculate texture lengthscale maps using the method of Buscombe et al. (2015)
    numclasses = 4 # number of texture classes (example value)
    PyHum.texture2(humfile, sonpath, win, doplot, numclasses)

    ## grid and map the scans
    nn = 128 # number of nearest neighbours used in gridding (example value)
    numstdevs = 5 # grid-cell outlier threshold in standard deviations (example value)
    use_uncorrected = 0 # 0 = map the radiometrically corrected scans
    PyHum.map(humfile, sonpath, cs2cs_args, res, mode, nn, numstdevs, use_uncorrected, scalemax)


or from within ipython (with a GUI prompt to navigate to the files):

   run proc_mysidescandata.py
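The gridding modes set in the example script (nearest neighbour, inverse-distance-weighted, gaussian-weighted) share one idea: each grid node takes its value from nearby point-cloud samples. A toy sketch of the first two modes follows — this is only an illustration for small arrays, not PyHum's own gridding code:

```python
import numpy as np

def grid_intensity(x, y, z, gx, gy, mode=1, nn=4, power=2.0):
    """Grid a point cloud (x, y, sidescan intensity z) onto nodes
    (gx, gy). mode=1: nearest neighbour; otherwise: inverse-distance
    weighting over the nn nearest samples. Toy illustration only."""
    out = np.empty(len(gx))
    for i, (px, py) in enumerate(zip(gx, gy)):
        d = np.hypot(x - px, y - py)          # distance to every sample
        if mode == 1:
            out[i] = z[d.argmin()]            # closest sample wins
        else:
            k = d.argsort()[:nn]              # nn nearest samples
            w = 1.0 / np.maximum(d[k], 1e-12) ** power
            out[i] = np.sum(w * z[k]) / np.sum(w)
    return out
```

Nearest neighbour (mode 1) preserves the raw intensities but gives blocky output; the weighted modes smooth over the nn nearest samples at the cost of some sharpness.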

Models

The following model flags are supported:

'mega'
'helix'
'onix'
998
1198
898
1199
798

Trouble Shooting

  1. Problem: pyhum read hangs for a long time (several minutes) on the test script. Try this: uninstall joblib and install an older version::

   pip uninstall joblib
   pip install joblib==0.7.1

  2. Problem: you get an "invalid mode or file name" error. Try this: construct file paths using raw strings e.g.::

   r'C:\Users\me\mydata\R0089'

or using os, e.g.::

   import os
   os.path.abspath(os.path.join(r'C:\Users','me','mydata','R0089'))
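Alternatively, the Windows-style path can be assembled with ntpath (the Windows flavour of os.path), which joins with backslashes on any platform and avoids escape-sequence surprises entirely — a small sketch:

```python
import ntpath

# ntpath joins with backslashes regardless of the host OS, so the
# literal backslashes never pass through Python's string escaping
p = ntpath.join(r'C:\Users', 'me', 'mydata', 'R0089')
print(p)  # C:\Users\me\mydata\R0089
```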

If you get C++ compiler errors (such as "Unable to find vcvarsall.bat"), you will need to install the Microsoft Visual C++ compiler from here: http://aka.ms/vcpython27

Support

This is a new project written and maintained by Daniel Buscombe. Bugs are expected - please report them. Please use the 'Issues' tab in github

https://github.com/dbuscombe-usgs/PyHum

Feedback and suggestions for improvements are very welcome

Please download, try, report bugs, fork, modify, evaluate, discuss, collaborate.

Project website here for more details

Thanks for stopping by!

Acknowledgements

This software is part of PyHum. This software is in the public domain because it contains materials that originally came from the United States Geological Survey, an agency of the United States Department of the Interior. For more information, see the official USGS copyright policy at

http://www.usgs.gov/visual-id/credit_usgs.html#copyright

Any use of trade, product, or firm names is for descriptive purposes only and does not imply endorsement by the U.S. government.

Thanks to Barb Fagetter ([email protected]) for some format info, Dan Hamill (Utah State University), Paul Anderson (Quest Geophysical Asia) and various others for debugging and suggestions for improvements

pyhum's People

Contributors: danhamill, dbuscombe-usgs, iannesbitt, mbenson182, sbfrf


pyhum's Issues

PyHum.map -- running into issues

I'm having problems with some output made by the map function. I'm not sure if it the function itself or is coming from the arguments that I am passing into the previous functions (read,correct,rmshadows,texture). Below are images showing what I am getting back.
map0
groundoverlays

I am using the default arguments for the functions except for these 2:
cs2cs_args = "epsg:26991" #Minnesota North
model = 1199

You can check the script below: (couldn't attach a .py file)
proc_mysidescandata.txt

I'm not sure where I am going wrong. You can get the output that I have made after running this script in the Drive folder shared below. It also contains the DAT and SON files that I am pointing to when running the script.

data_link

I would appreciate any help you could offer. Thanks a lot!

~rick

Python crashes when running any module but PyHum.read

The script used is attached below. I have installed all modules and am running the newest PyHum, 1.3.9. Windows 7 with 64 bit Python and Anaconda package. I get the message that python has crashed (or the iPython kernel if I am using Spyder) when I run any module but PyHum.read. Any advice?

Thanks,
Liam

BuscombeFixR00217.txt

Specifying 1 chunk throws a IO Error

Hello all,

I have ran into a problem when attempting to display PyHum in just 1 chunk.

image

This problem does not occur if I specify more than one chunk, e.g. 'h5' or 'd100'. PyHum is also running very slowly when stating "27027 windows to process". I noticed that CPU usage goes to 100% when processing windows - do I need more CPU to avoid this error?

Thanks,
Liam

filter footprint array has incorrect shape

I've got a couple of recordings that I get the following error on:

Traceback (most recent call last):
File "processSideScan.py", line 54, in
PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model=model, chunk=chunk)
File "/home/rick/anaconda2/envs/sss/lib/python2.7/site-packages/PyHum/_pyhum_read.py", line 665, in read
x, bed = humutils.auto_bedpick(ft, dep_m, chunkmode, port_fp, c)
File "/home/rick/anaconda2/envs/sss/lib/python2.7/site-packages/PyHum/utils.py", line 92, in auto_bedpick
imu = median_filter(imu,(np.shape(imu)[0]/100,np.shape(imu)[1]/100))
File "/home/rick/anaconda2/envs/sss/lib/python2.7/site-packages/scipy/ndimage/filters.py", line 1069, in median_filter
origin, 'median')
File "/home/rick/anaconda2/envs/sss/lib/python2.7/site-packages/scipy/ndimage/filters.py", line 989, in _rank_filter
raise RuntimeError('filter footprint array has incorrect shape.')
RuntimeError: filter footprint array has incorrect shape.

We recorded other tracks with the same machine and they processed without this error, any clues?
Here is a link to the data, thanks!

tag a release

Please tag a release so we can build a conda package with a version on binstar.org

"Something went wrong with the parallelised version of pyread ... "

Please help with my most recent issue,

I just collected some humminbird sonar data and am trying to process it. The following error message pops up:

image

I ran 'pip install PyHum --upgrade' to attempt to fix the issue, to no avail.

Attached is the .py file, .bat file to run the .py file.

Thank you!
Liam

Value Error PyHum.map()

I'm getting an error in PyHum.map() and not sure why

res = 99
dowrite = 0

Traceback (most recent call last):
  File "C:\Users\Rdebbout\temp\SideScanSonar\processSideScan.py", line 60, in <module>
    PyHum.map(humfile, sonpath, cs2cs_args, res, dowrite)
  File "C:\Users\Rdebbout\AppData\Local\Continuum\Anaconda2\envs\sss\lib\site-packages\PyHum\_pyhum_map.py", line 282, in map
    res = make_map(esi[shape_port[-1]*p:shape_port[-1]*(p+1)], nsi[shape_port[-1]*p:shape_port[-1]*(p+1)], theta[shape_port[-1]*p:shape_port[-1]*(p+1)], dist_tvg[shape_port[-1]*p:shape_port[-1]*(p+1)], port_fp[p], star_fp[p], R_fp[p], meta['pix_m'], res, cs2cs_args, sonpath, p, dowrite, mode, nn, numstdevs, meta['c'], np.arcsin(meta['c']/(1000*meta['t']*meta['f']))) #dogrid, influence
  File "C:\Users\Rdebbout\AppData\Local\Continuum\Anaconda2\envs\sss\lib\site-packages\PyHum\_pyhum_map.py", line 386, in make_map
    write.txtwrite( outfile, np.hstack((humutils.ascol(X.flatten()), humutils.ascol(Y.flatten()), humutils.ascol(merge.flatten()), humutils.ascol(D.flatten()), humutils.ascol(R.flatten()), humutils.ascol(h.flatten()), humutils.ascol(t.flatten()) )) )
  File "PyHum\write.pyx", line 10, in PyHum.write.txtwrite (PyHum\write.c:1428)
ValueError: Buffer dtype mismatch, expected 'float64_t' but got 'float'

PyHum problem Windows10

Hello Daniel,

Thank you for provide PyHum, it is a very interesting toolbox. I'm working with hydrodynamic modelling in estuaries and a I think that PyHum can be very useful to characterize near bottom morphodynamics.

I installed PyHum under windows10, python2.7 using pip install pyhum command.
The program test runs fine using the command python -c "import PyHum; PyHum.test.dotest()"
All test results look like good.

However, when I try to process my sonar data from a 999 ci hd si unit I'm getting this error message

WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
something went wrong with the parallelised version of pyread ...
Traceback (most recent call last):
File "E:\PROJETO_MODELAGEM_JOANES\BATIMETRIA_JOANES\sidescan\pyhum_novo.py", line 83, in
PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk)
File "C:\opentelemac-mascaret\python27\lib\site-packages\PyHum\_pyhum_read.py", line 404, in read
data = pyread.pyread(sonfiles, humfile, c, model, cs2cs_args)
TypeError: Argument 'humfile' has incorrect type (expected str, got unicode)

After this a modify the line 404 of _pyhum_read.py as bellow putting str(humfile) to get a string argument
data = pyread.pyread(sonfiles, str(humfile), c, model, cs2cs_args)

However, I got this new error

WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
something went wrong with the parallelised version of pyread ...
Traceback (most recent call last):
File "E:\PROJETO_MODELAGEM_JOANES\BATIMETRIA_JOANES\sidescan\pyhum_novo.py", line 83, in
PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk)
File "C:\opentelemac-mascaret\python27\lib\site-packages\PyHum\_pyhum_read.py", line 408, in read
metadat = data.getmetadata()
File "PyHum\pyread.pyx", line 494, in PyHum.pyread.pyread.getmetadata (PyHum\pyread.c:11589)
File "PyHum\pyread.pyx", line 500, in PyHum.pyread.pyread.getmetadata (PyHum\pyread.c:10281)
TypeError: 'NoneType' object is not subscriptable

I attached my sonar data for your checking.

I would really appreciate any help to solve this problem.

With best regards,

Taoan
sidescan.zip

Why does PyHum.map_texture give many raster ouputs?

I get many outputs for:
sonpath+’class_GroundOverlay’+str(p)+’.kml’: kml file
(contains gridded (or point cloud) texture lengthscale map for importing into google earth of the pth chunk)

which is the output I am most interested in. For example:

image

Why is this, and which is the best to use? Should I use all of them?

Thank you,
Liam

Memory issue

Hello Daniel,
thanks for your nice toolbox. Seems like with model=998 it also works for a Humminbird Helix 5 SI.
However, I got some problems when trying to process the data:

  1. Trying to process it as a whole, it get this error:
    philipp@azurit:~/Humminbird-Sonar/Aufnahmen/Heidkate/Hk001$ python Heidkate01.py
    Input file is R00011.DAT
    Son files are in .
    cs2cs arguments are epsg:3857
    Draft: 0.3
    Celerity of sound: 1495.0 m/s
    Transducer length is 0.108 m
    Bed picking is auto
    Only 1 chunk will be produced
    Data is from the 998 series
    Bearing will be calculated from coordinates
    Bearing will be filtered
    Checking the epsg code you have chosen for compatibility with Basemap ...
    ... epsg code compatible
    WARNING: Because files have to be read in byte by byte,
    this could take a very long time ...
    low-frq. downward scan not available
    port and starboard scans are different sizes ... rectifying
    Traceback (most recent call last):
    File "Heidkate01.py", line 140, in
    PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk) #cog
    File "/usr/local/lib/python2.7/dist-packages/PyHum/_pyhum_read.py", line 503, in read
    if np.shape(port_fp[0])[1] > np.shape(star_fp[0])[1]:
    IndexError: tuple index out of range

  2. When I use the chunk method, chunk=d100, Pyhum.read and Pyhum.correct work properly, however PyHum.map isn't able to process all chunks. After about 10 to 12 chunks, the memory overruns, and I get the following message:
    getting point cloud ...
    error on chunk 11
    When I change the corresponding for-loop in _pyhum_map.py on line 295 to:
    for p in range(10,len(star_fp)):
    I can process the next ~ 10 chunks, until the memory is full again, so it's not a data issue.

Thanks for help
Philipp

version 1.3.7 PyHum.read() pyread.pyx module returning NoneType

I have installed the most recent version 1.3.7 and installed using

pip install -e 'path/to/repo'

It installs and builds with no errors and I can import it, but when using the PyHum.read() function I am getting the following error:

Traceback (most recent call last):
File "", line 1, in
File "PyHum\_pyhum_read.py", line 396, in read
metadat = data.getmetadata()
File "pyread.pyx", line 479, in PyHum.pyread.pyread.getmetadata (PyHum\pyread.c:9841)
File "pyread.pyx", line 485, in PyHum.pyread.pyread.getmetadata (PyHum\pyread.c:8879)
TypeError: 'NoneType' object is not subscriptable

if I step through the _pyhum_read.py script, I am unable to:

import pyread

All of the files that we have collected with our Hummingbird model 1198 have only 3 files associated with them, running PyHum.read() with model = 1198 doesn’t seem to work, but if I switch it to 1199, it does and produces good plots up until I get to the PyHum.map() function which is where the problem exists that I mentioned in the github issue.

I have since tried to install PyHum onto a personal computer running Ubuntu 15.10 . Using the package installed through ‘pip install PyHum’ I was able to get to the map function, with PyHum version 1.3.7 installed I am getting the error which I have attached as a .txt (‘PyHum.read_output’) if I use model = 1199. Below is the error I get when I use model = 1198 in version 1.3.7 running PyHum.read():

Input file is C:\Users\Rdebbout\SideScanData\TEST\R00015.DAT
Son files are in C:\Users\Rdebbout\SideScanData\TEST\R00015
cs2cs arguments are epsg:26791
Draft: 0.3
Celerity of sound: 1450.0 m/s
Transducer length is 0.108 m
Bed picking is auto
Chunks based on distance of 100 m
Data is from the 1198 series
Heading based on course-over-ground
Bearing will be calculated from coordinates
Checking the epsg code you have chosen for compatibility with Basemap ...
... epsg code compatible
WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
something went wrong with the parallelised version of pyread ...
Traceback (most recent call last):
File "", line 1, in
File "PyHum\_pyhum_read.py", line 396, in read
metadat = data.getmetadata()
File "pyread.pyx", line 479, in PyHum.pyread.pyread.getmetadata (PyHum\pyread.c:9841)
File "pyread.pyx", line 485, in PyHum.pyread.pyread.getmetadata (PyHum\pyread.c:8879)
TypeError: 'NoneType' object is not subscriptable

The method that I used to install PyHum after cloning the git repository is as such, pip install -e D:\Projects\SideScanSonar\PyHum. If I try to use the ‘python setup.py install’ method I get the output in the txt file attached ‘setup.py_Install_Ubuntu.txt’. I can’t seem to use the setup.py method in either windows or Ubuntu. In windows, I get the error:

D:\Projects\SideScanSonar\PyHum>python setup.py install
C:\Users\Rdebbout\AppData\Local\Continuum\Anaconda\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'install_req
uires'
warnings.warn(msg)
running install
running build
running build_py
running build_ext
skipping 'PyHum\cwt.c' Cython extension (up-to-date)
building 'PyHum.cwt' extension
error: Unable to find vcvarsall.bat

            I have installed Microsoft Visual C++ Compiler for Python 2.7 and have added the directory to the vcvarsall.bat file to my environment variables, I have also tried to add a VS110COMNTOOLS variable with the same directory listed, no luck. I don’t expect you to know how to fix this, I just thought it might be worth mentioning because it seems to be a common problem with compiling packages in the Windows environment, one which I can’t figure out. Looking at the PKG-INFO on github it seems like you are using a Linux OS, so I thought that doing it on Ubuntu might work out better, but didn’t.

PyHum.read_output.txt
setup.py_Install_Ubuntu.txt

Thanks for all of your help!

Issue in Windows 10 with pykdtree

danhamill, this is the Windows issue. I have started another issue so the Linux and Windows install issues dont get confused. You can see from the error output below that install of pyresample fails at: "pykdtree/kdtree.c(354) : fatal error C1083: Cannot open include file: 'stdint.h': No such file or directory
error: command 'C:\Users\Phillip\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\cl.exe' failed with exit status
2". On Googling this issue it seems a problem with the C++ compiler not fully supporting the C99 standard and having stdint.h. I have just installed the VS 2017 C++ tools in the hope this would work (apparently the stdint.h code was added back in 2013?). No luck as the below shows.

Output from the Anaconda2 Prompt:

(C:\Users\Phillip\Anaconda2) C:\Users\Phillip>pip install pyhum
Requirement already satisfied: pyhum in c:\users\phillip\appdata\roaming\python\python27\site-packages

(C:\Users\Phillip\Anaconda2) C:\Users\Phillip>python -c "import PyHum; PyHum.test.dotest()"
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\Phillip\AppData\Roaming\Python\Python27\site-packages\PyHum\__init__.py", line 67, in <module>
from PyHum._pyhum_map import map
File "C:\Users\Phillip\AppData\Roaming\Python\Python27\site-packages\PyHum\_pyhum_map.py", line 79, in <module>
import pyresample
ImportError: No module named pyresample

(C:\Users\Phillip\Anaconda2) C:\Users\Phillip>pip install pyresample
Collecting pyresample
Requirement already satisfied: numpy in c:\users\phillip\anaconda2\lib\site-packages (from pyresample)
Collecting pykdtree>=1.1.1 (from pyresample)
Using cached pykdtree-1.2.1.tar.gz
In the tar file c:\users\phillip\appdata\local\temp\pip-mvyhtq-unpack\pykdtree-1.2.1.tar.gz the member pykdtree-1.2.1/README is invalid: unable to resolve link inside archive
Requirement already satisfied: setuptools>=3.2 in c:\users\phillip\anaconda2\lib\site-packages (from pyresample)
Requirement already satisfied: configobj in c:\users\phillip\anaconda2\lib\site-packages (from pyresample)
Requirement already satisfied: pyproj in c:\users\phillip\anaconda2\lib\site-packages (from pyresample)
Requirement already satisfied: pyyaml in c:\users\phillip\anaconda2\lib\site-packages (from pyresample)
Requirement already satisfied: six>=1.6.0 in c:\users\phillip\anaconda2\lib\site-packages (from setuptools>=3.2->pyresample)
Requirement already satisfied: packaging>=16.8 in c:\users\phillip\anaconda2\lib\site-packages (from setuptools>=3.2->pyresample)
Requirement already satisfied: appdirs>=1.4.0 in c:\users\phillip\anaconda2\lib\site-packages (from setuptools>=3.2->pyresample)
Requirement already satisfied: pyparsing in c:\users\phillip\anaconda2\lib\site-packages (from packaging>=16.8->setuptools>=3.2->pyresample)
Building wheels for collected packages: pykdtree
Running setup.py bdist_wheel for pykdtree ... error
Complete output from command C:\Users\Phillip\Anaconda2\python.exe -u -c "import setuptools, tokenize;file='c:\users\phillip\appdata\local\temp\pip-build-b3ewnc\pykdtree\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" bdist_wheel -d c:\users\phillip\appdata\local\temp\tmpqfgpxzpip-wheel- --python-tag cp27:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win32-2.7
creating build\lib.win32-2.7\pykdtree
copying pykdtree\test_tree.py -> build\lib.win32-2.7\pykdtree
copying pykdtree\__init__.py -> build\lib.win32-2.7\pykdtree
running build_ext
building 'pykdtree.kdtree' extension
creating build\temp.win32-2.7
creating build\temp.win32-2.7\Release
creating build\temp.win32-2.7\Release\pykdtree
C:\Users\Phillip\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IC:\Users\Phillip\Anaconda2\include -IC:\Users\Phillip\Anaconda2\PC -IC:\Users\Phillip\Anaconda2\lib\site-packages\numpy\core\include /Tcpykdtree/kdtree.c /Fobuild\temp.win32-2.7\Release\pykdtree/kdtree.obj /Ox /openmp
kdtree.c
c:\users\phillip\anaconda2\lib\site-packages\numpy\core\include\numpy\npy_1_7_deprecated_api.h(12) : Warning Msg: Using deprecated NumPy API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
pykdtree/kdtree.c(354) : fatal error C1083: Cannot open include file: 'stdint.h': No such file or directory
error: command 'C:\Users\Phillip\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\cl.exe' failed with exit status 2


Failed building wheel for pykdtree
Running setup.py clean for pykdtree
Failed to build pykdtree
Installing collected packages: pykdtree, pyresample
Running setup.py install for pykdtree ... error
Complete output from command C:\Users\Phillip\Anaconda2\python.exe -u -c "import setuptools, tokenize;__file__='c:\users\phillip\appdata\local\temp\pip-build-b3ewnc\pykdtree\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record c:\users\phillip\appdata\local\temp\pip-a4bahi-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win32-2.7
creating build\lib.win32-2.7\pykdtree
copying pykdtree\test_tree.py -> build\lib.win32-2.7\pykdtree
copying pykdtree\__init__.py -> build\lib.win32-2.7\pykdtree
running build_ext
building 'pykdtree.kdtree' extension
creating build\temp.win32-2.7
creating build\temp.win32-2.7\Release
creating build\temp.win32-2.7\Release\pykdtree
C:\Users\Phillip\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IC:\Users\Phillip\Anaconda2\include -IC:\Users\Phillip\Anaconda2\PC -IC:\Users\Phillip\Anaconda2\lib\site-packages\numpy\core\include /Tcpykdtree/kdtree.c /Fobuild\temp.win32-2.7\Release\pykdtree/kdtree.obj /Ox /openmp
kdtree.c
c:\users\phillip\anaconda2\lib\site-packages\numpy\core\include\numpy\npy_1_7_deprecated_api.h(12) : Warning Msg: Using deprecated NumPy API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
pykdtree/kdtree.c(354) : fatal error C1083: Cannot open include file: 'stdint.h': No such file or directory
error: command 'C:\Users\Phillip\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\cl.exe' failed with exit status 2

----------------------------------------

Command "C:\Users\Phillip\Anaconda2\python.exe -u -c "import setuptools, tokenize;__file__='c:\users\phillip\appdata\local\temp\pip-build-b3ewnc\pykdtree\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record c:\users\phillip\appdata\local\temp\pip-a4bahi-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in c:\users\phillip\appdata\local\temp\pip-build-b3ewnc\pykdtree\

(C:\Users\Phillip\Anaconda2) C:\Users\Phillip>

PyHum.correct: "global name 'jv' is not defined"

Hello,

I am receiving the following error:

(screenshot: NameError: global name 'jv' is not defined)

Here is my working script, which I run with a batch file. I looked into PyHum.correct, and the only occurrence of "jv" is on line 753, with no local or global definition. That line is contained within the function "c_scans_lambertian", which is only called from the function above it, "correct_scans_lambertian". "correct_scans_lambertian" is in turn called in a few places, one of which is line 366, where its result is assigned to the variable "Zt" under an "if" statement on line 352 that depends on the user-defined "correct_withwater" variable. I changed that variable from "0" to "1", but it did not fix the problem. As such, I am out of solutions...help! 😄

import sys, getopt

from Tkinter import Tk
from tkFileDialog import askopenfilename, askdirectory

import PyHum
import os

if __name__ == '__main__':

    argv = sys.argv[1:]
    humfile = ''; sonpath = ''

    # parse inputs to variables
    # This part is necessary to turn raw_input into arguments
    try:
        opts, args = getopt.getopt(argv,"hi:s:")
    except getopt.GetoptError:
        print 'error'
        sys.exit(2)
    for opt, arg in opts:
        if opt == '-h':
            print 'help'
            sys.exit()
        elif opt in ("-i"):
            humfile = arg
        elif opt in ("-s"):
            sonpath = arg

    # prompt user to supply file if no input file given
    if not humfile:
        print 'An input file is required!!!!!!'
        Tk().withdraw() # we don't want a full GUI, so keep the root window from appearing
        humfile = askopenfilename(filetypes=[("DAT files","*.DAT")])

    # prompt user to supply directory if no input sonpath is given
    if not sonpath:
        print 'A *.SON directory is required!!!!!!'
        Tk().withdraw() # we don't want a full GUI, so keep the root window from appearing
        sonpath = askdirectory()

    # print given arguments to screen and convert data type where necessary
    if humfile:
        print 'Input file is %s' % (humfile)
    if sonpath:
        print 'Son files are in %s' % (sonpath)

    # general settings
    # All readings set to default, except for the EPSG code, model & chunk
    doplot = 0 # 1 = yes, 0 = no. no need to plot, it just takes up space

    # reading-specific settings
    cs2cs_args = "epsg:26941" # NorCal
    bedpick = 1 # auto bed pick
    c = 1450 # speed of sound in fresh water
    t = 0.108 # length of transducer
    draft = 0.3 # draft in metres
    flip_lr = 0 # flip port and starboard
    model = 898 # humminbird model
    cog = 1 # GPS course-over-ground used for heading
    calc_bearing = 0 # no
    filt_bearing = 0 # no
    chunk = '1'
    #chunk = 'd65' # distance, 65 m
    #chunk = 'p1000' # pings, 1000
    #chunk = 'h10' # heading deviation, 10 deg

    # correction-specific settings
    maxW = 1000 # rms output wattage
    dofilt = 0 # 1 = apply a phase-preserving filter (WARNING!! takes a very long time for large scans)
    correct_withwater = 0 # don't retain water column in radiometric correction (1 = retain water column for radiometric corrections)
    ph = 7.0 # acidity on the pH scale
    temp = 10.0 # water temperature in degrees Celsius
    salinity = 0.0

    # for shadow removal
    shadowmask = 0 # automatic shadow removal

    # for texture calcs
    win = 100 # pixel window
    shift = 10 # pixel shift
    density = win/2
    numclasses = 4 # number of discrete classes for contouring and k-means
    maxscale = 20 # max scale as inverse fraction of data length (for wavelet analysis)
    notes = 100 # notes per octave (for wavelet analysis)

    # for mapping
    res = 0.5 # grid resolution in metres
    # if res==99, the program will automatically calc res from the spatial res of the scans
    #mode = 1 # gridding mode (simple nearest neighbour)
    #mode = 2 # gridding mode (inverse distance weighted nearest neighbour)
    mode = 3 # gridding mode (gaussian weighted nearest neighbour)
    dowrite = 1 # if 1, ASCII point cloud will be written to file. May not need this if we insert ArcPy

    nn = 64 # number of nearest neighbours for gridding (used if mode > 1)
    influence = 1 # radius of influence used in gridding. Cut-off distance in metres
    numstdevs = 4 # threshold number of standard deviations in sidescan intensity per grid cell up to which to accept

    # for downward-looking echosounder echogram (e1-e2) analysis
    beam = 20.0 # width of down-imaging beam
    transfreq = 200.0 # frequency (kHz) of downward-looking echosounder
    integ = 5 # number of pings over which to integrate
    numclusters = 3 # number of acoustic classes to group observations

    ## read data in SON files into PyHum memory-mapped format (.dat)
    PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk)

    ## correct scans and remove water column
    PyHum.correct(humfile, sonpath, maxW, doplot, dofilt, correct_withwater, ph, temp, salinity)

    ## remove acoustic shadows (caused by distal acoustic attenuation or sound hitting shallows or shoreline)
    PyHum.rmshadows(humfile, sonpath, win, shadowmask, doplot)

    ## calculate texture lengthscale maps using the method of Buscombe et al. (2015)
    #PyHum.texture(humfile, sonpath, win, shift, doplot, density, numclasses, maxscale, notes)

    ## grid and map the scans
    PyHum.map(humfile, sonpath, cs2cs_args, res, dowrite, mode, nn, numstdevs)

    ## grid and map the texture lengthscale maps
    #PyHum.map_texture(humfile, sonpath, cs2cs_args, res, mode, nn, numstdevs)

    ## calculate and map the e1 and e2 acoustic coefficients from the downward-looking sonar
    #PyHum.e1e2(humfile, sonpath, cs2cs_args, ph, temp, salinity, beam, transfreq, integ, numclusters, doplot)

Thank you!
Liam
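For reference, `jv` is the Bessel function of the first kind from `scipy.special`, so this error usually means that import is missing from the module. One possible workaround (a sketch, not a confirmed fix for PyHum) is to inject the missing name into the module's namespace before calling `PyHum.correct`, e.g. `import PyHum._pyhum_correct; from scipy.special import jv; PyHum._pyhum_correct.jv = jv`. The pattern, demonstrated on a toy module so it runs standalone:

```python
import types

# Toy stand-in for PyHum._pyhum_correct: a module containing a function
# that references a name ('jv_like') that was never imported into it.
mod = types.ModuleType("toy_correct")
exec("def scale(x):\n    return jv_like(x) * 2", mod.__dict__)

def jv_like(x):
    # stand-in for scipy.special.jv (Bessel function of the first kind)
    return x + 1

# Injecting the missing name into the module namespace resolves the
# NameError, because globals are looked up at call time.
mod.jv_like = jv_like
print(mod.scale(3))  # -> 8
```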

Image correction

Thanks to Dan I was able to create an image with just the water column of a particular humfile like the image below:
(screenshot: merge_scan_withwater)

Now, what I am having issues with is applying a correction to the image to compensate for acoustic attenuation. I've determined the intensity dissipation rate as it relates to depth, and would like to apply that equation paired with the depth metadata. I'm unsure how to go about it in python/pyhum.
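If the dissipation follows an exponential decay, I_measured = I_true * exp(-k * depth), then the compensation is just a depth-dependent gain applied sample by sample. A minimal sketch (the function and parameter names here are hypothetical, not PyHum parameters; `k` is the user-determined dissipation rate per metre):

```python
import math

def compensate(intensity, depth_m, k=0.1):
    # Undo exponential attenuation by multiplying each sample by exp(k * depth).
    # intensity and depth_m are parallel sequences, e.g. taken from
    # PyHum's exported scan data and depth metadata.
    return [i * math.exp(k * d) for i, d in zip(intensity, depth_m)]

# a sample attenuated at 2 m depth recovers its original value
measured = 50.0 * math.exp(-0.1 * 2.0)
recovered = compensate([measured], [2.0], k=0.1)[0]
print(recovered)  # ~50.0
```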

Any feedback would be greatly appreciated, Cheers,
mike

DLL load failed: The specified module could not be found

Hello again,
I was trying to install this package on the computer of one of the engineers who got me started on this package to begin with. We went through your latest Anaconda prompt install commands and had no issues whatsoever! HOORAY :)
However, when we ran python -c "import PyHum; PyHum.dotest()" we ran into an import gui error and a "DLL load failed" error


Any ideas on what could have caused this?

pykdtree available only for Linux?

Hi folks,

I get the notification "install pykdtree for faster kd-tree operations: https://github.com/storpipfugl/pykdtree" when running any PyHum script.


When attempting to install this library using:

python setup.py install (inside of pykdtree download folder)

I get the error message "TypeError: unsupported operand type(s) for %: 'tuple' and 'str'"


When looking at the github site, it appears that this library may only be available for Linux. Is this the case?

Issue in Linux with test

Hi Daniel, I'm getting the following error trying the test. I will confess I am an absolute newbie with Python and Linux. I tried to get it working in Windows 10 and overcame a series of errors one by one, but the issue of "stdint.h" not being found made me turn to an older Linux computer we have sitting in the office. I got that working OK, apart from an initial error due to Basemap (reinstalling corrected the version issues, I think). We are using the latest Humminbird Helix MEGA imaging system (1.2 MHz) and would like to try your texture classification method and possibly the biomass function.

hydrobiology@hydrobiology-Satellite-A660 ~ $ python -c "import PyHum; PyHum.test.dotest()"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/hydrobiology/anaconda2/lib/python2.7/site-packages/PyHum/__init__.py", line 60, in <module>
from PyHum._pyhum_read import read
File "/home/hydrobiology/anaconda2/lib/python2.7/site-packages/PyHum/_pyhum_read.py", line 61, in <module>
from scipy.io import savemat
File "/home/hydrobiology/anaconda2/lib/python2.7/site-packages/scipy/io/__init__.py", line 85, in <module>
from .matlab import loadmat, savemat, whosmat, byteordercodes
File "/home/hydrobiology/anaconda2/lib/python2.7/site-packages/scipy/io/matlab/__init__.py", line 13, in <module>
from .mio import loadmat, savemat, whosmat
File "/home/hydrobiology/anaconda2/lib/python2.7/site-packages/scipy/io/matlab/mio.py", line 12, in <module>
from .miobase import get_matfile_version, docfiller
File "/home/hydrobiology/anaconda2/lib/python2.7/site-packages/scipy/io/matlab/miobase.py", line 22, in <module>
from scipy.misc import doccer
File "/home/hydrobiology/anaconda2/lib/python2.7/site-packages/scipy/misc/__init__.py", line 49, in <module>
from scipy.special import comb, factorial, factorial2, factorialk
File "/home/hydrobiology/anaconda2/lib/python2.7/site-packages/scipy/special/__init__.py", line 601, in <module>
from ._ufuncs import *
ImportError: libgfortran.so.1: cannot open shared object file: No such file or directory

_pyhum_map ascii output

Just a quick question: Is the sidescan intensity value written to the ascii file the raw intensity value from the sonar recording, or is it the corrected intensity value based on your corrections?
Thanks!

mosaic_texture NameError -- 'noisefloor'

Using 1.3.14 I'm getting the following error when running through the mosaic_texture module.

`Input file is C:\Users\Rdebbout\SideScanTEST\R00023.DAT
Sonar file path is C:\Users\Rdebbout\SideScanTEST\R00023
cs2cs arguments are epsg:26991
Gridding resolution: 99.0
Number of nearest neighbours for gridding: 5
Weighting for gridding: 1
creating grids ...
mosaicking ...
Traceback (most recent call last):
File "C:\Users\Rdebbout\SideScanTEST\processSideScan.py", line 67, in <module>
PyHum.mosaic_texture(humfile, sonpath, cs2cs_args)
File "C:\Users\Rdebbout\AppData\Local\Continuum\Anaconda2\envs\sidescan3\lib\site-packages\PyHum\_pyhum_mosaic_texture.py", line 357, in mosaic_texture
S[S<noisefloor] = np.nan
NameError: global name 'noisefloor' is not defined`

Output directory location fail

Is there a way to identify a directory where you would like output to be placed? I am working through these modules in this order, as is done in your test script:

read()
correct()
rmshadows()
texture()
map()
map_texture()
e1e2()

It is putting the created .dat / .mat files in the humfile directory, and the created png files in the sonfile directory. Does this seem right? Anyway, if you look at the traceback below, there seems to be a problem with the handling of these paths.

Traceback (most recent call last):
File "C:\Users\Rdebbout\temp\SideScanSonar\processSideScan.py", line 60, in <module>
PyHum.texture(humfile, sonpath)
File "C:\Users\Rdebbout\AppData\Local\Continuum\Anaconda2\envs\sss\lib\site-packages\PyHum\_pyhum_texture.py", line 216, in texture
ft = 1/loadmat(sonpath+base+'meta.mat')['pix_m']
File "C:\Users\Rdebbout\AppData\Local\Continuum\Anaconda2\envs\sss\lib\site-packages\scipy\io\matlab\mio.py", line 134, in loadmat
MR = mat_reader_factory(file_name, appendmat, **kwargs)
File "C:\Users\Rdebbout\AppData\Local\Continuum\Anaconda2\envs\sss\lib\site-packages\scipy\io\matlab\mio.py", line 57, in mat_reader_factory
byte_stream = _open_file(file_name, appendmat)
File "C:\Users\Rdebbout\AppData\Local\Continuum\Anaconda2\envs\sss\lib\site-packages\scipy\io\matlab\mio.py", line 23, in _open_file
return open(file_like, 'rb')
IOError: [Errno 22] invalid mode ('rb') or filename: 'C:/Users/Rdebbout/temp/SideScanSonar/R00021\\C:/Users/Rdebbout/temp/SideScanSonar/R00021meta.mat'

The meta.mat file has been created in the same directory where the humfile is located:

`humfile = 'C:/Users/Rdebbout/temp/SideScanSonar/R00021.DAT'`

`C:/Users/Rdebbout/temp/SideScanSonar/R00021meta.mat`
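The doubled path in the IOError comes from `sonpath+base+'meta.mat'`, where `base` evidently already contains a full path. A sketch of the kind of handling that avoids the duplication (illustrative variable names, not PyHum's internals):

```python
import os

humfile = 'C:/Users/Rdebbout/temp/SideScanSonar/R00021.DAT'
sonpath = 'C:/Users/Rdebbout/temp/SideScanSonar'

# derive the bare basename so joining with sonpath cannot duplicate the directory
base = os.path.splitext(os.path.basename(humfile))[0]
meta = os.path.normpath(os.path.join(sonpath, base + 'meta.mat'))
print(base)  # -> R00021
print(meta)  # ends with R00021meta.mat, directory appears only once
```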

rmshadows win default

I'm looking at the rmshadows script in v. 1.3.7 and am seeing that the default for win size in the arguments is 31:

def rmshadows(humfile, sonpath, win=31, shadowmask=0, doplot=1):
'''
Remove dark shadows in scans caused by shallows, shorelines, and attenuation of acoustics with distance
Manual or automated processing options available
Works on the radiometrically corrected outputs of the correct module

Syntax
----------
[] = PyHum.rmshadows(humfile, sonpath, win, shadowmask, doplot)

Parameters
----------
humfile : str
   path to the .DAT file
sonpath : str
   path where the *.SON files are
win : int, *optional* [Default=100]
   window size (pixels) for the automated shadow removal algorithm

Would a larger window size remove more shadows or a smaller window size?

Thanks
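As a rough intuition (a toy illustration of local-statistics masking, not PyHum's actual algorithm): if shadows are flagged where the local mean intensity over a sliding window drops below a threshold, the window size sets the scale of what counts as a shadow. A smaller window flags narrow dark runs; a larger window only flags features that are dark over a larger extent:

```python
def shadow_mask(row, win, thresh=0.2):
    # flag samples whose local mean over a sliding window falls below thresh
    half = win // 2
    mask = []
    for i in range(len(row)):
        lo, hi = max(0, i - half), min(len(row), i + half + 1)
        window = row[lo:hi]
        mask.append(sum(window) / float(len(window)) < thresh)
    return mask

row = [1.0] * 10 + [0.0] * 3 + [1.0] * 10  # a narrow dark gap in bright returns
print(sum(shadow_mask(row, 3)))   # small window flags the gap
print(sum(shadow_mask(row, 11)))  # large window averages it away
```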

PyHum.test.dotest() failing... memory mapping failed or something went wrong with parallelised version of pyread...

I finally got PyHum installed, and when trying to test it in PyCharm I run the following code:
import PyHum
PyHum.test.dotest()

When it runs, I get the error and tracebacks found below:

C:\anaconda2\python.exe C:/PythonCode/PyHum/test.py
Directory not copied. Error: [Error 183] Cannot create a file when that file already exists: 'C:\Users\ryan.hefley\pyhum_test'
Input file is C:\Users\ryan.hefley\pyhum_test\test.DAT
Son files are in C:\Users\ryan.hefley\pyhum_test
cs2cs arguments are epsg:26949
Draft: 0.3
Celerity of sound: 1450.0 m/s
Port and starboard will be flipped
Transducer length is 0.108 m
Bed picking is auto
Only 1 chunk will be produced
Data is from the 998 series
Bearing will be calculated from coordinates
Bearing will be filtered
Checking the epsg code you have chosen for compatibility with Basemap ...
... epsg code compatible
WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
Directory not copied. Error: [Error 183] Cannot create a file when that file already exists: 'C:\Users\ryan.hefley\pyhum_test'Directory not copied. Error: [Error 183] Cannot create a file when that file already exists: 'C:\Users\ryan.hefley\pyhum_test
'
Input file is C:\Users\ryan.hefley\pyhum_test\test.DAT
Input file is C:\Users\ryan.hefley\pyhum_test\test.DATSon files are in C:\Users\ryan.hefley\pyhum_test

cs2cs arguments are epsg:26949
Son files are in C:\Users\ryan.hefley\pyhum_test
cs2cs arguments are epsg:26949
Draft: 0.3
Draft: 0.3
Celerity of sound: 1450.0 m/sCelerity of sound: 1450.0 m/s

Port and starboard will be flippedPort and starboard will be flipped

Transducer length is 0.108 mTransducer length is 0.108 m

Bed picking is auto
Bed picking is autoOnly 1 chunk will be produced

Only 1 chunk will be producedData is from the 998 series

Bearing will be calculated from coordinatesData is from the 998 series

Bearing will be filtered
Bearing will be calculated from coordinatesChecking the epsg code you have chosen for compatibility with Basemap ...

Bearing will be filtered
Checking the epsg code you have chosen for compatibility with Basemap ...
Directory not copied. Error: [Error 183] Cannot create a file when that file already exists: 'C:\Users\ryan.hefley\pyhum_test'
Input file is C:\Users\ryan.hefley\pyhum_test\test.DAT
Son files are in C:\Users\ryan.hefley\pyhum_test
cs2cs arguments are epsg:26949
Draft: 0.3
Celerity of sound: 1450.0 m/s
Port and starboard will be flipped
Transducer length is 0.108 m
Bed picking is auto
Only 1 chunk will be produced
Data is from the 998 series
Bearing will be calculated from coordinates
Bearing will be filtered
Checking the epsg code you have chosen for compatibility with Basemap ...
Directory not copied. Error: [Error 183] Cannot create a file when that file already exists: 'C:\Users\ryan.hefley\pyhum_test'
Input file is C:\Users\ryan.hefley\pyhum_test\test.DAT
Son files are in C:\Users\ryan.hefley\pyhum_test
cs2cs arguments are epsg:26949
Draft: 0.3
Celerity of sound: 1450.0 m/s
Port and starboard will be flipped
Transducer length is 0.108 m
Bed picking is auto
Only 1 chunk will be produced
Data is from the 998 series
Bearing will be calculated from coordinates
Bearing will be filtered
Checking the epsg code you have chosen for compatibility with Basemap ...
... epsg code compatible
WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
something went wrong with the parallelised version of pyread ...
... epsg code compatible
WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
something went wrong with the parallelised version of pyread ...
... epsg code compatible
WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
something went wrong with the parallelised version of pyread ...
... epsg code compatible
WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
something went wrong with the parallelised version of pyread ...
memory-mapping failed in sliding window - trying memory intensive version
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\anaconda2\lib\multiprocessing\forking.py", line 380, in main
prepare(preparation_data)
File "C:\anaconda2\lib\multiprocessing\forking.py", line 510, in prepare
'__parents_main__', file, path_name, etc
File "C:\PythonCode\PyHum\test.py", line 2, in <module>
PyHum.test.dotest()
File "C:\anaconda2\lib\site-packages\PyHum\test.py", line 136, in dotest
PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk) #cog
File "C:\anaconda2\lib\site-packages\PyHum\_pyhum_read.py", line 472, in read
shape_port = io.set_mmap_data(sonpath, base, '_data_port.dat', 'int16', Zt)
File "C:\anaconda2\lib\site-packages\PyHum\io.py", line 31, in set_mmap_data
with open(os.path.normpath(os.path.join(sonpath,base+string)), 'w+') as ff:
IOError: [Errno 22] invalid mode ('w+') or filename: 'C:\Users\ryan.hefley\pyhum_test\test_data_port.dat'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\anaconda2\lib\multiprocessing\forking.py", line 380, in main
prepare(preparation_data)
File "C:\anaconda2\lib\multiprocessing\forking.py", line 510, in prepare
'__parents_main__', file, path_name, etc
File "C:\PythonCode\PyHum\test.py", line 2, in <module>
PyHum.test.dotest()
File "C:\anaconda2\lib\site-packages\PyHum\test.py", line 136, in dotest
PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk) #cog
File "C:\anaconda2\lib\site-packages\PyHum\_pyhum_read.py", line 472, in read
shape_port = io.set_mmap_data(sonpath, base, '_data_port.dat', 'int16', Zt)
File "C:\anaconda2\lib\site-packages\PyHum\io.py", line 31, in set_mmap_data
with open(os.path.normpath(os.path.join(sonpath,base+string)), 'w+') as ff:
IOError: [Errno 22] invalid mode ('w+') or filename: 'C:\Users\ryan.hefley\pyhum_test\test_data_port.dat'
memory-mapping failed in sliding window - trying memory intensive version
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\anaconda2\lib\multiprocessing\forking.py", line 380, in main
prepare(preparation_data)
File "C:\anaconda2\lib\multiprocessing\forking.py", line 510, in prepare
'__parents_main__', file, path_name, etc
File "C:\PythonCode\PyHum\test.py", line 2, in <module>
PyHum.test.dotest()
File "C:\anaconda2\lib\site-packages\PyHum\test.py", line 136, in dotest
PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk) #cog
File "C:\anaconda2\lib\site-packages\PyHum\_pyhum_read.py", line 472, in read
shape_port = io.set_mmap_data(sonpath, base, '_data_port.dat', 'int16', Zt)
File "C:\anaconda2\lib\site-packages\PyHum\io.py", line 31, in set_mmap_data
with open(os.path.normpath(os.path.join(sonpath,base+string)), 'w+') as ff:
IOError: [Errno 22] invalid mode ('w+') or filename: 'C:\Users\ryan.hefley\pyhum_test\test_data_port.dat'
memory-mapping failed in sliding window - trying memory intensive version
memory-mapping failed in sliding window - trying memory intensive version
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\anaconda2\lib\multiprocessing\forking.py", line 380, in main
prepare(preparation_data)
File "C:\anaconda2\lib\multiprocessing\forking.py", line 510, in prepare
'__parents_main__', file, path_name, etc
File "C:\PythonCode\PyHum\test.py", line 2, in <module>
PyHum.test.dotest()
File "C:\anaconda2\lib\site-packages\PyHum\test.py", line 136, in dotest
PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk) #cog
File "C:\anaconda2\lib\site-packages\PyHum\_pyhum_read.py", line 667, in read
x, bed = humutils.auto_bedpick(ft, dep_m, chunkmode, port_fp, c)
File "C:\anaconda2\lib\site-packages\PyHum\utils.py", line 82, in auto_bedpick
imu.append(port_fp[np.max([0,int(np.min(bed))-buff]):int(np.max(bed))+buff,:])
File "C:\anaconda2\lib\site-packages\numpy\core\memmap.py", line 335, in __getitem__
res = super(memmap, self).__getitem__(index)
TypeError: slice indices must be integers or None or have an index method
Directory not copied. Error: [Error 183] Cannot create a file when that file already exists: 'C:\Users\ryan.hefley\pyhum_test'
Input file is C:\Users\ryan.hefley\pyhum_test\test.DAT
Son files are in C:\Users\ryan.hefley\pyhum_test
cs2cs arguments are epsg:26949
Draft: 0.3
Celerity of sound: 1450.0 m/s
Port and starboard will be flipped
Transducer length is 0.108 m
Bed picking is auto
Only 1 chunk will be produced
Data is from the 998 series
Bearing will be calculated from coordinates
Bearing will be filtered
Checking the epsg code you have chosen for compatibility with Basemap ...
Directory not copied. Error: [Error 183] Cannot create a file when that file already exists: 'C:\Users\ryan.hefley\pyhum_test'
Input file is C:\Users\ryan.hefley\pyhum_test\test.DAT
Son files are in C:\Users\ryan.hefley\pyhum_test
cs2cs arguments are epsg:26949
Draft: 0.3
Celerity of sound: 1450.0 m/s
Port and starboard will be flipped
Transducer length is 0.108 m
Bed picking is auto
Only 1 chunk will be produced
Data is from the 998 series
Bearing will be calculated from coordinates
Bearing will be filtered
Checking the epsg code you have chosen for compatibility with Basemap ...
Directory not copied. Error: [Error 183] Cannot create a file when that file already exists: 'C:\Users\ryan.hefley\pyhum_test'
Input file is C:\Users\ryan.hefley\pyhum_test\test.DAT
Son files are in C:\Users\ryan.hefley\pyhum_test
cs2cs arguments are epsg:26949
Draft: 0.3
Celerity of sound: 1450.0 m/s
Port and starboard will be flipped
Transducer length is 0.108 m
Bed picking is auto
Only 1 chunk will be produced
Data is from the 998 series
Bearing will be calculated from coordinates
Bearing will be filtered
Checking the epsg code you have chosen for compatibility with Basemap ...
Directory not copied. Error: [Error 183] Cannot create a file when that file already exists: 'C:\Users\ryan.hefley\pyhum_test'
Input file is C:\Users\ryan.hefley\pyhum_test\test.DAT
Son files are in C:\Users\ryan.hefley\pyhum_test
cs2cs arguments are epsg:26949
Draft: 0.3
Celerity of sound: 1450.0 m/s
Port and starboard will be flipped
Transducer length is 0.108 m
Bed picking is auto
Only 1 chunk will be produced
Data is from the 998 series
Bearing will be calculated from coordinates
Bearing will be filtered
Checking the epsg code you have chosen for compatibility with Basemap ...
... epsg code compatible
WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
something went wrong with the parallelised version of pyread ...
... epsg code compatible
WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
something went wrong with the parallelised version of pyread ...
... epsg code compatible
WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
something went wrong with the parallelised version of pyread ...
... epsg code compatible
WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
something went wrong with the parallelised version of pyread ...
memory-mapping failed in sliding window - trying memory intensive version
Traceback (most recent call last):
File "", line 1, in
File "C:\anaconda2\lib\multiprocessing\forking.py", line 380, in main
prepare(preparation_data)
File "C:\anaconda2\lib\multiprocessing\forking.py", line 510, in prepare
'parents_main', file, path_name, etc
File "C:\PythonCode\PyHum\test.py", line 2, in
PyHum.test.dotest()
File "C:\anaconda2\lib\site-packages\PyHum\test.py", line 136, in dotest
PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk) #cog
File "C:\anaconda2\lib\site-packages\PyHum_pyhum_read.py", line 472, in read
shape_port = io.set_mmap_data(sonpath, base, '_data_port.dat', 'int16', Zt)
File "C:\anaconda2\lib\site-packages\PyHum\io.py", line 31, in set_mmap_data
with open(os.path.normpath(os.path.join(sonpath,base+string)), 'w+') as ff:
IOError: [Errno 22] invalid mode ('w+') or filename: 'C:\Users\ryan.hefley\pyhum_test\test_data_port.dat'
Traceback (most recent call last):
File "", line 1, in
File "C:\anaconda2\lib\multiprocessing\forking.py", line 380, in main
prepare(preparation_data)
File "C:\anaconda2\lib\multiprocessing\forking.py", line 510, in prepare
'parents_main', file, path_name, etc
File "C:\PythonCode\PyHum\test.py", line 2, in
PyHum.test.dotest()
File "C:\anaconda2\lib\site-packages\PyHum\test.py", line 136, in dotest
PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk) #cog
File "C:\anaconda2\lib\site-packages\PyHum_pyhum_read.py", line 472, in read
shape_port = io.set_mmap_data(sonpath, base, '_data_port.dat', 'int16', Zt)
File "C:\anaconda2\lib\site-packages\PyHum\io.py", line 31, in set_mmap_data
with open(os.path.normpath(os.path.join(sonpath,base+string)), 'w+') as ff:
IOError: [Errno 22] invalid mode ('w+') or filename: 'C:\Users\ryan.hefley\pyhum_test\test_data_port.dat'
memory-mapping failed in sliding window - trying memory intensive version
memory-mapping failed in sliding window - trying memory intensive version
Traceback (most recent call last):
File "", line 1, in
File "C:\anaconda2\lib\multiprocessing\forking.py", line 380, in main
prepare(preparation_data)
File "C:\anaconda2\lib\multiprocessing\forking.py", line 510, in prepare
'parents_main', file, path_name, etc
File "C:\PythonCode\PyHum\test.py", line 2, in
PyHum.test.dotest()
File "C:\anaconda2\lib\site-packages\PyHum\test.py", line 136, in dotest
memory-mapping failed in sliding window - trying memory intensive version
PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk) #cog
File "C:\anaconda2\lib\site-packages\PyHum_pyhum_read.py", line 472, in read
shape_port = io.set_mmap_data(sonpath, base, '_data_port.dat', 'int16', Zt)
File "C:\anaconda2\lib\site-packages\PyHum\io.py", line 31, in set_mmap_data
with open(os.path.normpath(os.path.join(sonpath,base+string)), 'w+') as ff:
IOError: [Errno 22] invalid mode ('w+') or filename: 'C:\Users\ryan.hefley\pyhum_test\test_data_port.dat'
memory-mapping failed in sliding window - trying memory intensive version
Traceback (most recent call last):
File "", line 1, in
File "C:\anaconda2\lib\multiprocessing\forking.py", line 380, in main
prepare(preparation_data)
File "C:\anaconda2\lib\multiprocessing\forking.py", line 510, in prepare
'parents_main', file, path_name, etc
File "C:\PythonCode\PyHum\test.py", line 2, in
PyHum.test.dotest()
File "C:\anaconda2\lib\site-packages\PyHum\test.py", line 136, in dotest
PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk) #cog
File "C:\anaconda2\lib\site-packages\PyHum_pyhum_read.py", line 667, in read
x, bed = humutils.auto_bedpick(ft, dep_m, chunkmode, port_fp, c)
File "C:\anaconda2\lib\site-packages\PyHum\utils.py", line 82, in auto_bedpick
imu.append(port_fp[np.max([0,int(np.min(bed))-buff]):int(np.max(bed))+buff,:])
File "C:\anaconda2\lib\site-packages\numpy\core\memmap.py", line 335, in getitem
res = super(memmap, self).getitem(index)
TypeError: slice indices must be integers or None or have an index method
Directory not copied. Error: [Error 183] Cannot create a file when that file already exists: 'C:\Users\ryan.hefley\pyhum_test'
Input file is C:\Users\ryan.hefley\pyhum_test\test.DAT
Son files are in C:\Users\ryan.hefley\pyhum_test
cs2cs arguments are epsg:26949
Draft: 0.3
Celerity of sound: 1450.0 m/s
Port and starboard will be flipped
Transducer length is 0.108 m
Bed picking is auto
Only 1 chunk will be produced
Data is from the 998 series
Bearing will be calculated from coordinates
Bearing will be filtered
Checking the epsg code you have chosen for compatibility with Basemap ...
... epsg code compatible
WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
something went wrong with the parallelised version of pyread ...
memory-mapping failed in sliding window - trying memory intensive version
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\anaconda2\lib\multiprocessing\forking.py", line 380, in main
prepare(preparation_data)
File "C:\anaconda2\lib\multiprocessing\forking.py", line 510, in prepare
'__parents_main__', file, path_name, etc
File "C:\PythonCode\PyHum\test.py", line 2, in <module>
PyHum.test.dotest()
File "C:\anaconda2\lib\site-packages\PyHum\test.py", line 136, in dotest
PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk) #cog
File "C:\anaconda2\lib\site-packages\PyHum\_pyhum_read.py", line 472, in read
shape_port = io.set_mmap_data(sonpath, base, '_data_port.dat', 'int16', Zt)
File "C:\anaconda2\lib\site-packages\PyHum\io.py", line 31, in set_mmap_data
with open(os.path.normpath(os.path.join(sonpath,base+string)), 'w+') as ff:
IOError: [Errno 22] invalid mode ('w+') or filename: 'C:\Users\ryan.hefley\pyhum_test\test_data_port.dat'
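Both tracebacks point at Python-2-on-Windows pitfalls rather than the sonar data itself: a memmap backing file must be created in binary mode, and numpy float scalars must be cast to int before they can be used as slice indices. A minimal sketch of the two workarounds (helper names are hypothetical, not PyHum's actual code):

```python
import os
import tempfile

import numpy as np

def write_memmap(path, data):
    # 'w+b', not 'w+': on Windows, text mode here raises IOError [Errno 22]
    with open(path, 'w+b') as ff:
        data.tofile(ff)                     # raw bytes, as np.memmap expects
    return np.memmap(path, dtype=data.dtype, mode='r', shape=data.shape)

def safe_slice(fp, lo, hi):
    # cast to plain int so the slice bounds have an __index__ method
    return fp[int(lo):int(hi), :]

path = os.path.join(tempfile.mkdtemp(), 'test_data_port.dat')
Zt = np.arange(12, dtype='int16').reshape(4, 3)
port_fp = write_memmap(path, Zt)
# np.float64 bounds would raise the TypeError above without the int() cast
sub = safe_slice(port_fp, np.float64(1.0), np.float64(3.0))
print(sub.shape)  # (2, 3)
```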

Specify output file and name to test outcome of different variables (resolution, mode)

Hello,

I have been able to generate PyHum.map_texture maps of our Humminbird sonar data, but the output is different from what I expected and somewhat indecipherable.

(attached image)

Is it possible to clean up the image by adjusting the resolution or mode settings, or is what I am seeing the true distribution of sediment size across the river?

Finally, can I control the output file name of my PyHum.map_texture results, so that I can compare runs with different settings (resolution, for example) against one another?

Thank you,
Liam
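As far as this thread shows, map_texture does not take an output-name argument; one workaround is to copy each run's outputs into a folder tagged with the parameter values before starting the next run. A minimal sketch, assuming the texture maps can be recognised by "map_texture" in the file name (that naming pattern is an assumption, not documented behaviour):

```python
import os
import shutil

def archive_run(sonpath, res, mode, dest_root):
    """Copy one run's texture-map outputs into a folder tagged with its parameters."""
    tag = "res%s_mode%s" % (res, mode)
    dest = os.path.join(dest_root, tag)
    if not os.path.isdir(dest):
        os.makedirs(dest)
    for fname in os.listdir(sonpath):
        # 'map_texture' in the name is an assumption about the output files
        if "map_texture" in fname:
            shutil.copy2(os.path.join(sonpath, fname), dest)
    return dest
```

Calling `archive_run(sonpath, 0.2, 1, 'runs')` after each map_texture call keeps the outputs of one parameter set from being overwritten by the next.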

PyHum.map_texture memory error

PyHum throws a memory error during the PyHum.dotest() command:

memoryerrorpyhum

I am running Windows 7 64-bit with manually installed Python libraries. When I attempt to run only the PyHum.map_texture module, the following error is thrown:

memoryerrorpyhum2

Run time

Hi PyHum developers,

I have a quick question about run time. When I run the test, it takes more than half a day and still does not finish. Is it because my computer is too slow? What is the recommended hardware?

My computer is a 1.3 GHz Intel Core i5 Mac with 4 GB of 1600 MHz DDR3 RAM.

IOError[Error 22]

Dear PyHum developer,

I am getting a new runtime error:

File "_pyhum_read.py", line 511, in read
with open(os.path.normpath(os.path.join(sonpath,base+'_data_download.dat')),'w+') as ff:
IOError: [Errno 22] invalid mode ('w+') or filename: 'Myfile_data_dwnlow.dat'

This only happens for some humfile/sonpath combinations, not for all of them.

Would you please tell me what is going on? Thank you.
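On Windows, `IOError: [Errno 22]` from `open(..., 'w+')` usually means the assembled path contains a character the filesystem rejects, or the target directory does not exist. A pre-flight check along these lines can narrow it down (a sketch only; the filename pattern mirrors the traceback above):

```python
import os

BAD_CHARS = '<>:"|?*'  # characters Windows does not allow in file names

def checked_open_path(sonpath, base):
    """Validate the target path before opening it for writing."""
    fname = base + "_data_dwnlow.dat"
    bad = [c for c in fname if c in BAD_CHARS]
    if bad:
        raise ValueError("illegal characters in filename: %r" % bad)
    if not os.path.isdir(sonpath):
        raise ValueError("sonpath does not exist: %r" % sonpath)
    return os.path.normpath(os.path.join(sonpath, fname))
```

If the check passes for the failing file too, the next suspects are path length limits and permissions on that directory.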

PyHum.dotest() failing

I tried running dotest() after installing version 1.3.14 into a new Anaconda environment on Windows, and the following error message comes up:

NameError: global name 'jv' is not defined

here is as much of the output as I could save:

PyHum_dotest_error.txt

I would also note that for a Windows install with Anaconda you need to pin the version of scipy to 0.16.0 -> conda install scipy==0.16.0

Extracting Information from .SON and/or .IDX file

This is not a technical issue per se, but the reason that I have stumbled upon PyHum is because of a problem that one of our engineers was having. He has .SON and .IDX files (B001.IDX,B001.SON,B002.IDX,B002.SON,B003.IDX,B003.SON,B004.IDX,B004.SON)

This was done on the Arkansas River, in Arkansas; the coordinate system we use is NAD 83 15N.

I was showing him PyHum; he doesn't care too much about the graphs and images, but he would like to extract the following information: {lat, long, depth}.

How could I use PyHum to extract this information to a CSV so he can view it and we can plot it in ArcGIS?

Thank you so much, and sorry for my lack of knowledge; I tried to dig into the documentation and got confused fairly quickly.
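PyHum can write point data to text files (see the `dowrite` option in the scripts elsewhere in this thread), but for a plain {lat, long, depth} table you can also assemble one yourself once the navigation arrays are in hand. A sketch, assuming you have already extracted `lat`, `lon` and `depth` as equal-length sequences (the variable names are hypothetical, not PyHum API):

```python
import csv

def write_latlondepth_csv(lat, lon, depth, outfile):
    """Write lat/lon/depth triples to a CSV that ArcGIS can ingest as XY data."""
    with open(outfile, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["lat", "lon", "depth_m"])
        for row in zip(lat, lon, depth):
            w.writerow(row)
```

In ArcGIS the resulting file can be loaded with "Add XY Data" using lon as X and lat as Y.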

ValueError on Map_Texture as a result of X & Y values of []

Hello @dbuscombe-usgs @danhamill

I am getting a "ValueError: zero-size array to reduction operation minimum which has no identity" while using map_texture. The error happens regularly at the third "getting point cloud" of my sonar file, which I assume is the third chunk. After putting "print X" & "print Y" above the offending line I determined that the cause of this error is values of [] for both X and Y. Can this chunk be bypassed? I am currently using a chunk delineator of 5 degrees.

valueerrorsource

Thank you,
Liam
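Until the underlying cause is fixed, empty chunks can be skipped defensively before any min/max reduction is attempted. A sketch in plain NumPy (the chunk handling here is schematic, not PyHum's actual code):

```python
import numpy as np

def safe_extent(X, Y):
    """Return (xmin, xmax, ymin, ymax), or None if the chunk is empty."""
    X = np.asarray(X)
    Y = np.asarray(Y)
    if X.size == 0 or Y.size == 0:
        return None  # caller should skip this chunk entirely
    return X.min(), X.max(), Y.min(), Y.max()
```

The point-cloud loop would then `continue` past any chunk for which `safe_extent` returns None instead of raising the zero-size reduction error.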

PyHum.map_texture

Error message while trying to get spatially referenced texture maps for PyHum on Windows 7 using Spyder IDE: "TypeError: coercing to Unicode: need string or buffer, NoneType found". Here is full error output:

Traceback (most recent call last):

File "", line 1, in
runfile('C:/Users/user/CodingWorkingFiles/Tutorial/BuscombeFix00219.py', wdir='C:/Users/user/CodingWorkingFiles/Tutorial')

File "C:\Users\user\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 699, in runfile
execfile(filename, namespace)

File "C:\Users\user\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 74, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)

File "C:/Users/user/CodingWorkingFiles/Tutorial/BuscombeFix00219.py", line 93, in
dotest()

File "C:/Users/user/CodingWorkingFiles/Tutorial/BuscombeFix00219.py", line 77, in dotest
PyHum.texture(humfile, sonpath, win, shift, doplot, density, numclasses, maxscale, notes)

File "C:\Users\user\Anaconda2\lib\site-packages\PyHum\_pyhum_texture.py", line 389, in texture
wc = get_kclass(class_fp[p].copy(), numclasses)

File "C:\Users\user\Anaconda2\lib\site-packages\PyHum\_pyhum_texture.py", line 456, in get_kclass
wc, values = humutils.cut_kmeans(Sk,numclasses+1)

File "C:\Users\user\Anaconda2\lib\site-packages\PyHum\utils.py", line 492, in cut_kmeans
wc = da.from_array(wc, chunks=1000) #dask implementation

File "C:\Users\user\Anaconda2\lib\site-packages\dask\array\core.py", line 1428, in from_array
name = name or 'from-array-' + tokenize(x, chunks)

File "C:\Users\user\Anaconda2\lib\site-packages\dask\base.py", line 240, in tokenize
return md5(str(tuple(map(normalize_token, args))).encode()).hexdigest()

File "C:\Users\user\Anaconda2\lib\site-packages\dask\utils.py", line 479, in __call__
return lk[cls](arg)

File "C:\Users\user\Anaconda2\lib\site-packages\dask\base.py", line 205, in normalize_array
return x.filename, os.path.getmtime(x.filename), x.dtype, x.shape

File "C:\Users\user\Anaconda2\lib\genericpath.py", line 62, in getmtime
return os.stat(filename).st_mtime

TypeError: coercing to Unicode: need string or buffer, NoneType found

Here is full script to run PyHum:

# -*- coding: utf-8 -*-

"""
"""

import PyHum

def dotest():

    humfile = 'E:\Sturgeon\2015Substrate\Humminbird\BeenUploaded\R00219.DAT'
    sonpath = 'E:\Sturgeon\2015Substrate\Humminbird\BeenUploaded\R00219'

    doplot = 1 #yes

    # reading specific settings
    cs2cs_args = "epsg:32100" #Montana
    bedpick = 1 # auto bed pick
    c = 1450 # speed of sound fresh water
    t = 0.108 # length of transducer
    draft = 0.3 # draft in metres
    flip_lr = 0 # flip port and starboard
    model = 998 # humminbird model
    cog = 1 # GPS course-over-ground used for heading
    calc_bearing = 1 #no
    filt_bearing = 0 #no
    chunk = 65 # distance, 65m
    #chunk = 'p1000' # pings, 1000
    #chunk = 'h10' # heading deviation, 10 deg

    # correction specific settings
    maxW = 1000 # rms output wattage
    dofilt = 0 # 1 = apply a phase preserving filter (WARNING!! takes a very long time for large scans)
    correct_withwater = 0 # don't retain water column in radiometric correction (1 = retains water column for radiometric corrections)
    ph = 7.0 # acidity on the pH scale
    temp = 10.0 # water temperature in degrees Celsius
    salinity = 0.0

    # for shadow removal
    shadowmask = 0 #automatic shadow removal

    # for texture calcs
    win = 100 # pixel window
    shift = 10 # pixel shift
    density = win/2
    numclasses = 4 # number of discrete classes for contouring and k-means
    maxscale = 20 # Max scale as inverse fraction of data length (for wavelet analysis)
    notes = 4 # Notes per octave (for wavelet analysis)

    # for mapping
    dogrid = 1 #yes
    res = 0.2 # grid resolution in metres
    # if res==99, the program will automatically calc res from the spatial res of the scans
    mode = 1 # gridding mode (simple nearest neighbour)
    #mode = 2 # gridding mode (inverse distance weighted nearest neighbour)
    #mode = 3 # gridding mode (gaussian weighted nearest neighbour)
    dowrite = 0 #disable writing of point cloud data to file

    nn = 128 #number of nearest neighbours for gridding (used if mode > 1)
    influence = 1 #Radius of influence used in gridding. Cut off distance in meters
    numstdevs = 5 #Threshold number of standard deviations in sidescan intensity per grid cell up to which to accept

    # for downward-looking echosounder echogram (e1-e2) analysis
    beam = 20.0
    transfreq = 200.0 # frequency (kHz) of downward looking echosounder
    integ = 5
    numclusters = 3 # number of acoustic classes to group observations

    # read data in SON files into PyHum memory mapped format (.dat)
    PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, cog, chunk)

    # correct scans and remove water column
    PyHum.correct(humfile, sonpath, maxW, doplot, dofilt, correct_withwater, ph, temp, salinity)

    # remove acoustic shadows (caused by distal acoustic attenuation or sound hitting shallows or shoreline)
    PyHum.rmshadows(humfile, sonpath, win, shadowmask, doplot)

    # Calculate texture lengthscale maps using the method of Buscombe et al. (2015)
    PyHum.texture(humfile, sonpath, win, shift, doplot, density, numclasses, maxscale, notes)

    # grid and map the scans
    PyHum.map(humfile, sonpath, cs2cs_args, res, dowrite, mode, nn, influence, numstdevs)

    res = 0.5 # grid resolution in metres
    numstdevs = 5

    # grid and map the texture lengthscale maps
    PyHum.map_texture(humfile, sonpath, cs2cs_args, dogrid, res, mode, nn, influence, numstdevs)

    # calculate and map the e1 and e2 acoustic coefficients from the downward-looking sonar
    #PyHum.e1e2(humfile, sonpath, cs2cs_args, ph, temp, salinity, beam, transfreq, integ, numclusters, doplot)

if __name__ == '__main__':
    dotest()

Thank you!

README.rst Virtual Environment setup

Under the Virtual Environment heading in the README.rst page, the line that says

python -c "import PyHum; PyHum.test()"

should read:

python -c "import PyHum; PyHum.dotest()"

Error: Invalid mode or filename

I am running into issues before PyHum gives me a map; see the error below:

image

I am using Windows 7 and Python 2.7. Here is a text file of the Python script I am running:

BuscombeFix.txt

Thank you for your help!

Error(s) in _pyhum_read and _pyhum_correct

I have installed PyHum and dependencies successfully on Ubuntu 16.04 using an Anaconda python environment. When I run the dotest() function, this is the output I get:

$ python -c "import PyHum; PyHum.test.dotest()"
Input file is /home/user/pyhum_test/test.DAT
Son files are in /home/user/pyhum_test
cs2cs arguments are epsg:26949
Draft: 0.3
Celerity of sound: 1450.0 m/s
Port and starboard will be flipped
Transducer length is 0.108 m
Bed picking is auto
Chunks based on distance of 100 m
Data is from the 998 series
Bearing will be calculated from coordinates
Bearing will be filtered
Checking the epsg code you have chosen for compatibility with Basemap ... 
... epsg code compatible
WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
port sonar data will be parsed into 3.0, 99 m chunks
starboard sonar data will be parsed into 3.0, 99 m chunks
low-freq. sonar data will be parsed into 3.0, 99 m chunks
high-freq. sonar data will be parsed into 3.0, 99 m chunks
Processing took  46.6593990326 seconds to analyse
Done!
Input file is /home/user/pyhum_test/test.DAT
Sonar file path is /home/user/pyhum_test
Max. transducer power is 1000.0 W
pH is 7.0
Temperature is 10.0
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/user/anaconda2/lib/python2.7/site-packages/PyHum/test.py", line 134, in dotest
    PyHum.correct(humfile, sonpath, maxW, doplot, dofilt, correct_withwater, ph, temp, salinity)
  File "/home/user/anaconda2/lib/python2.7/site-packages/PyHum/_pyhum_correct.py", line 528, in correct
    plot_dwnlow_scans(low_fp[p], dist_m, shape_low, ft, sonpath, p)
  File "/home/user/anaconda2/lib/python2.7/site-packages/PyHum/_pyhum_correct.py", line 848, in plot_dwnlow_scans
    plt.axis('normal'); plt.axis('tight')
  File "/home/user/anaconda2/lib/python2.7/site-packages/matplotlib/pyplot.py", line 1537, in axis
    return gca().axis(*v, **kwargs)
  File "/home/user/anaconda2/lib/python2.7/site-packages/matplotlib/axes/_base.py", line 1583, in axis
    self.autoscale_view(tight=False)
  File "/home/user/anaconda2/lib/python2.7/site-packages/matplotlib/axes/_base.py", line 2322, in autoscale_view
    'minposy', self.yaxis, self._ymargin, y_stickies, self.set_ybound)
  File "/home/user/anaconda2/lib/python2.7/site-packages/matplotlib/axes/_base.py", line 2301, in handle_single_axis
    do_lower_margin = not np.any(np.isclose(x0, stickies))
  File "/home/user/anaconda2/lib/python2.7/site-packages/numpy/core/numeric.py", line 2451, in isclose
    yfin = isfinite(y)
TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''

I understand this is a matplotlib error on the backend but from some basic googling it looks like it may stem from an issue related to numpy array construction or reference on the PyHum end. Any suggestions for a fix? I'd be happy to provide further information.

Note: After updating Anaconda it appears as though I broke it even more. Now this is the output of the dotest() function:

$ python -c "import PyHum; PyHum.test.dotest()"
Directory not copied. Error: [Errno 17] File exists: '/home/user/pyhum_test'
Input file is /home/user/pyhum_test/test.DAT
Son files are in /home/user/pyhum_test
cs2cs arguments are epsg:26949
Draft: 0.3
Celerity of sound: 1450.0 m/s
Port and starboard will be flipped
Transducer length is 0.108 m
Bed picking is auto
Chunks based on distance of 100 m
Data is from the 998 series
Bearing will be calculated from coordinates
Bearing will be filtered
Checking the epsg code you have chosen for compatibility with Basemap ... 
... epsg code compatible
WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
port sonar data will be parsed into 3.0, 99 m chunks
starboard sonar data will be parsed into 3.0, 99 m chunks
low-freq. sonar data will be parsed into 3.0, 99 m chunks
high-freq. sonar data will be parsed into 3.0, 99 m chunks
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/user/anaconda2/lib/python2.7/site-packages/PyHum/test.py", line 131, in dotest
    PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk) #cog
  File "/home/user/anaconda2/lib/python2.7/site-packages/PyHum/_pyhum_read.py", line 667, in read
    x, bed = humutils.auto_bedpick(ft, dep_m, chunkmode, port_fp, c)
  File "/home/user/anaconda2/lib/python2.7/site-packages/PyHum/utils.py", line 79, in auto_bedpick
    imu.append(port_fp[k][np.max([0,int(np.min(bed))-buff]):int(np.max(bed))+buff,:])
  File "/home/user/anaconda2/lib/python2.7/site-packages/numpy/core/memmap.py", line 335, in __getitem__
    res = super(memmap, self).__getitem__(index)
TypeError: slice indices must be integers or None or have an __index__ method

I'm rolling back to my previous Anaconda setup to try and gain some insight into the first traceback issue but thought I'd include the second error as well. I can open a separate issue regarding the second traceback later.
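The second traceback is the float-slice problem that newer NumPy versions turned into a hard error: `np.max([0, ...])` yields a NumPy float when `bed` is float, and a memmap can no longer be sliced with it. The fix is to force the slice bounds to plain ints, roughly like this (a sketch of the pattern, not the actual patched utils.py):

```python
import numpy as np

def bed_window(port_fp, bed, buff):
    """Slice a ping array around the picked bed, with integer slice bounds."""
    lo = int(max(0, int(np.min(bed)) - buff))  # clamp at top of array
    hi = int(np.max(bed)) + buff
    return port_fp[lo:hi, :]
```

The same int() casts applied to the `imu.append(port_fp[k][...])` line in utils.py should clear both TypeError tracebacks shown in this thread.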

MEGA data processing errors

There are a few errors that pop up when processing MEGA data, not sure if its only for this transducer set-up.

The first is that the horizontal distance of the output processed data is much less than actual. For instance, the right bank is ~21 m from the boat track in the attached file, whereas PyHum is reading it as ~7 m. bed_2picks0. This does not happen if I set the transducer to the incorrect "1199" (instead of "'mega'"), in which case the correct horizontal distance is returned.

Shadow removal seems to remove everything beyond about 10 m from the transducer, whereas the data were collected out to 30 m. Contouring only returns a thin strip at the edge of the data area (see attached).
r00170class_contours0

The second issue is that I get a memory error in the gridding process, whereby it keeps trying larger and larger resolutions indefinitely, regardless of whether the res setting is "99" or "1". I have to hit exit/enter in the CMD terminal to get it to stop. This also happens when I use the (incorrect) 1199 transducer type in the script.

I don't know if it's relevant, but I get "memory mapping failed in sliding window - trying memory intensive version" in the initial read module. It seems to continue OK from that point, though. Using a Windows 10 machine, i7-8550 with 16 GB RAM.

Does anyone know the correct transducer array length for the MEGA units (XM 9 20 MSI T)? It physically measures about 16-18 cm, but I can't find details about the internal array length.
bed_pick0

Script below:

# -*- coding: utf-8 -*-

# syntax for script without file path included: (pyhum) C:\Users\philw>python bhp_pyhum_pw1.py -i C:\Users\philw\BHPsonar\R00170.DAT -s C:\Users\philw\BHPsonar\R00170\

"""
Spyder Editor

This is a temporary script file.
"""
import sys, getopt

from Tkinter import Tk
from tkFileDialog import askopenfilename, askdirectory

import PyHum
import os

if __name__ == '__main__':

argv = sys.argv[1:]
humfile = ''; sonpath = ''

# parse inputs to variables
try:
   opts, args = getopt.getopt(argv,"hi:s:")
except getopt.GetoptError:
     print 'error'
     sys.exit(2)
for opt, arg in opts:
   if opt == '-h':
     print 'help'
     sys.exit()
   elif opt in ("-i"):
      humfile = arg
   elif opt in ("-s"):
      sonpath = arg

# prompt user to supply file if no input file given
if not humfile:
   print 'An input file is required!!!!!!'
   Tk().withdraw() # we don't want a full GUI, so keep the root window from appearing
   humfile = askopenfilename(filetypes=[("DAT files","*.DAT")]) 

# prompt user to supply directory if no input sonpath is given
if not sonpath:
   print 'A *.SON directory is required!!!!!!'
   Tk().withdraw() # we don't want a full GUI, so keep the root window from appearing
   sonpath = askdirectory() 

# print given arguments to screen and convert data type where necessary
if humfile:
   print 'Input file is %s' % (humfile)

if sonpath:
   print 'Son files are in %s' % (sonpath)
             
doplot = 1 #yes

# reading specific settings
cs2cs_args = "epsg:4326" #WGS84
bedpick = 1 # auto bed pick
c = 1450 # speed of sound fresh water
t = 0.108 # length of transducer for MEGA unit not confirmed
draft = 0.3 # draft in metres
flip_lr = 0 # flip port and starboard
model = 'mega' # humminbird model
calc_bearing = 0 #no
filt_bearing = 0 #no
chunk = 'd100' # distance, 100m
#chunk = 'p1000' # pings, 1000
#chunk = 'h10' # heading deviation, 10 deg
 
# correction specific settings
maxW = 1000 # rms output wattage for MEGA unit
dofilt = 0 # 1 = apply a phase preserving filter (WARNING!! takes a very long time for large scans)
correct_withwater = 0 # don't retain water column in radiometric correction (1 = retains water column for radiometric corrections)
ph = 7.0 # acidity on the pH scale
temp = 26.0 # water temperature in degrees Celsius 20 at S4 dike and 27 at Mascarenhas in Oct 2017
salinity = 0.0 #salinity = 0.0


# for shadow removal
shadowmask = 0 #auto shadow removal
win = 10

# for mapping
res = 99 # grid resolution in metres
# if res==99, the program will automatically calc res from the spatial res of the scans
mode = 1 # gridding mode (simple nearest neighbour)
#mode = 2 # gridding mode (inverse distance weighted nearest neighbour)
#mode = 3 # gridding mode (gaussian weighted nearest neighbour)
dowrite = 0 #disable writing of point cloud data to file
scalemax = 60 # max color scale value (60 is a good place to start)

## read data in SON files into PyHum memory mapped format (.dat)
PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk) #cog

## correct scans and remove water column
PyHum.correct(humfile, sonpath, maxW, doplot, dofilt, correct_withwater, ph, temp, salinity)

## remove acoustic shadows (caused by distal acoustic attenuation or sound hitting shallows or shoreline)
PyHum.rmshadows(humfile, sonpath, win, shadowmask, doplot)

## Calculate texture lengthscale maps using the method of Buscombe et al. (2015)
win = 10
numclasses = 4
PyHum.texture2(humfile, sonpath, win, doplot, numclasses)

## grid and map the scans
cs2cs_args = "epsg:4326"
res = 99
mode = 1
nn = 64
numstdevs = 4
use_uncorrected = 0
scalemax = 60
PyHum.map(humfile, sonpath, cs2cs_args, res, mode, nn, numstdevs, use_uncorrected, scalemax) #dowrite, 
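The runaway-resolution behaviour described above can be bounded on the caller's side by capping the number of coarsening retries instead of letting the search run forever. A sketch of that pattern (`grid_fn` is a hypothetical stand-in for the gridding call; PyHum's internal retry loop is not exposed like this):

```python
def grid_with_fallback(grid_fn, res0, max_tries=5):
    """Try gridding at res0, doubling the cell size on MemoryError,
    but give up after max_tries attempts instead of looping forever."""
    res = float(res0)
    for _ in range(max_tries):
        try:
            return grid_fn(res)
        except MemoryError:
            res *= 2.0  # coarsen the grid and retry
    raise RuntimeError("gridding failed even at %.2f m resolution" % res)
```

A hard cap like this turns an infinite coarsening loop into a clear failure that can be logged and investigated.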

Spatial datum of ASCII point cloud

Hi all,

I am attempting to input the ASCII point cloud into ArcGIS for texture analysis. The kml is good to look at, but to analyze the data I need to input the point cloud. I know the coordinate system is WGS 1984, but what is the projected coordinate system of the ASCII output? I've tried UTM 10N (I am in Northern California), NAD 1927 and NAD 1983.

Thanks,
Liam

Issue PyHum.dotest()

Hi,

I'm using Windows 10 and I am running PyHum via Anaconda and the Spyder IDE.
When I import PyHum I get this warning:

C:\Users\Sieglinde\Anaconda2\envs\PyHumm\lib\site-packages\matplotlib\__init__.py:1405: UserWarning:
This call to matplotlib.use() has no effect because the backend has already
been chosen; matplotlib.use() must be called before pylab, matplotlib.pyplot,
or matplotlib.backends is imported for the first time.

warnings.warn(_use_error_msg)

When I execute PyHum.dotest() I get the following:

Directory not copied. Error: [Error 183] Cannot create a file when that file already exists: 'C:\Users\Sieglinde\pyhum_test'
Input file is C:\Users\Sieglinde\pyhum_test\test.DAT
Son files are in C:\Users\Sieglinde\pyhum_test
cs2cs arguments are epsg:26949
Draft: 0.3
Celerity of sound: 1450.0 m/s
Port and starboard will be flipped
Transducer length is 0.108 m
Bed picking is auto
Chunks based on distance of 100 m
Data is from the 998 series
Bearing will be calculated from coordinates
Bearing will be filtered
Checking the epsg code you have chosen for compatibility with Basemap ...
epsg code compatible
WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
port sonar data will be parsed into 3.0, 99 m chunks
starboard sonar data will be parsed into 3.0, 99 m chunks
memory-mapping failed in sliding window - trying memory intensive version
low-freq. sonar data will be parsed into 3.0, 99 m chunks
high-freq. sonar data will be parsed into 3.0, 99 m chunks
memory-mapping failed in sliding window - trying memory intensive version
Traceback (most recent call last):

File "", line 1, in
PyHum.dotest()

File "C:\Users\Sieglinde\Anaconda2\envs\PyHumm\lib\site-packages\PyHum\test.py", line 131, in dotest
PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk) #cog

File "C:\Users\Sieglinde\Anaconda2\envs\PyHumm\lib\site-packages\PyHum\_pyhum_read.py", line 667, in read
x, bed = humutils.auto_bedpick(ft, dep_m, chunkmode, port_fp, c)

File "C:\Users\Sieglinde\Anaconda2\envs\PyHumm\lib\site-packages\PyHum\utils.py", line 79, in auto_bedpick
imu.append(port_fp[k][np.max([0,int(np.min(bed))-buff]):int(np.max(bed))+buff,:])

File "C:\Users\Sieglinde\Anaconda2\envs\PyHumm\lib\site-packages\numpy\core\memmap.py", line 335, in __getitem__
res = super(memmap, self).__getitem__(index)

TypeError: slice indices must be integers or None or have an __index__ method

Thanks a lot for your help.

Have a good one!
Christian

Spatial location differences with SonarTRX

Hi Dan,

I was wondering if you could shed some light on some spatial location differences with the imagery produced by PyHum compared with imagery processed with SonarTRX. Using PyHum and provided test data and all defaults in the test script, I created the following imagery in ArcGIS:

capture
To create that image, I took the raw ss ascii file, created a point feature class in ArcGIS, and used the natural neighbor tool with pixel size of 0.1 m to create the raster. It seems to be comparable with imagery produced by PyHum.

I then took the sonar data and processed with SonarTRX to create the following imagery:

capture

When I overlay the two images, you can start to see the differences:

capture

When you look at specific objects in the two images, you can see the differences better:

capture

compared to this:

capture

I can understand and expect to see differences between the two processing methods, but when I look closer at the PyHum output, I begin to notice that the PyHum data is maybe too sensitive to the heading, or perhaps there is an oversmoothing of the heading in SonarTRX.

In the next image, a portion of the raw ping data is shown compared to the trackline and image footprints of the imagery produced by SonarTRX. It seems like the image footprints can be thought of like a ping. You can see that the pyhum data is following the same trackline but the pings are kind of fanning out in comparison to the SonarTRX data:

capture

And the same extent with all of the ping points loaded:

capture

Do you have any ideas why I might be seeing these differences?

Thanks for your time!

Issue with correct Module plot_dwnlow_scans

I moved this from #43 because it is a different issue.

When running the test, a TypeError gets thrown in the correct module.

Full traceback:

(C:\Users\Phillip\Anaconda2) C:\Users\Phillip>python -c "import PyHum; PyHum.test.dotest()"
Input file is C:\Users\Phillip\pyhum_test\test.DAT
Son files are in C:\Users\Phillip\pyhum_test
cs2cs arguments are epsg:26949
Draft: 0.3
Celerity of sound: 1450.0 m/s
Port and starboard will be flipped
Transducer length is 0.108 m
Bed picking is auto
Chunks based on distance of 100 m
Data is from the 998 series
Bearing will be calculated from coordinates
Bearing will be filtered
Checking the epsg code you have chosen for compatibility with Basemap ...
... epsg code compatible
WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
port sonar data will be parsed into 3.0, 99 m chunks
starboard sonar data will be parsed into 3.0, 99 m chunks
memory-mapping failed in sliding window - trying memory intensive version
low-freq. sonar data will be parsed into 3.0, 99 m chunks
high-freq. sonar data will be parsed into 3.0, 99 m chunks
memory-mapping failed in sliding window - trying memory intensive version
Processing took 49.4254002521 seconds to analyse
Done!
Input file is C:\Users\Phillip\pyhum_test\test.DAT
Sonar file path is C:\Users\Phillip\pyhum_test
Max. transducer power is 1000.0 W
pH is 7.0
Temperature is 10.0
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Phillip\AppData\Roaming\Python\Python27\site-packages\PyHum\test.py", line 134, in dotest
PyHum.correct(humfile, sonpath, maxW, doplot, dofilt, correct_withwater, ph, temp, salinity)
File "C:\Users\Phillip\AppData\Roaming\Python\Python27\site-packages\PyHum\_pyhum_correct.py", line 528, in correct
plot_dwnlow_scans(low_fp[p], dist_m, shape_low, ft, sonpath, p)
File "C:\Users\Phillip\AppData\Roaming\Python\Python27\site-packages\PyHum\_pyhum_correct.py", line 848, in plot_dwnlow_scans
plt.axis('normal'); plt.axis('tight')
File "C:\Users\Phillip\Anaconda2\lib\site-packages\matplotlib\pyplot.py", line 1537, in axis
return gca().axis(*v, **kwargs)
File "C:\Users\Phillip\Anaconda2\lib\site-packages\matplotlib\axes\_base.py", line 1583, in axis
self.autoscale_view(tight=False)
File "C:\Users\Phillip\Anaconda2\lib\site-packages\matplotlib\axes\_base.py", line 2322, in autoscale_view
'minposy', self.yaxis, self._ymargin, y_stickies, self.set_ybound)
File "C:\Users\Phillip\Anaconda2\lib\site-packages\matplotlib\axes\_base.py", line 2301, in handle_single_axis
do_lower_margin = not np.any(np.isclose(x0, stickies))
File "C:\Users\Phillip\Anaconda2\lib\site-packages\numpy\core\numeric.py", line 2451, in isclose
yfin = isfinite(y)
TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''

@philw6 could you provide some information about your machine? Matplotlib version, anaconda distribution, and numpy version?

Do I have to run all modules if only 1 module output is necessary?

I'm not sure if this post belongs under the issues tab; if not, then I apologize. However I think this is an issue that may apply to other people.

If I only need the output from PyHum.map_texture, which modules do I also need to run to get it? For example, can I leave out PyHum.map or PyHum.texture, and if so, should I change any of the variables to accommodate this change? I am running a large batch of Humminbird files (~150) and would like to reduce processing time if possible.

Thank you,
Liam
