
spectral's Introduction

Spectral Python (SPy)


Spectral Python (SPy) is a pure Python module for processing hyperspectral image data (imaging spectroscopy data). It has functions for reading, displaying, manipulating, and classifying hyperspectral imagery. Full details about the package are on the web site.

Installation Instructions

The latest release is always hosted on PyPI, so if you have pip installed, you can install SPy from the command line with

pip install spectral

Packaged distributions are also hosted on PyPI and GitHub, so you can download and unpack the latest zip/tarball, then type

python setup.py install

To install the latest development version, download or clone the git repository and install as above. Alternatively, no installation is required: you can simply access (or symlink) the spectral module within the source tree.

Finally, up-to-date guidance on installing via the popular conda package and environment management system can be found in the official conda-forge documentation.

Unit Tests

To run the suite of unit tests, you must have numpy installed and you must have the sample data files downloaded to the current directory (or one specified by the SPECTRAL_DATA environment variable). To run the unit tests, type

python -m spectral.tests.run

Dependencies

Using SPy interactively with its visualization capabilities requires IPython and several other packages (depending on the features used). See the web site for details.

spectral's People

Contributors

donm, gemmaellen, gitter-badger, kdbanman, kidpixo, kormang, lewismc, linkid, mhmdjouni, rajathkmp, tboggs, toaarnio, wwlswj


spectral's Issues

Keyword "interleave" has no effect in envi.create_image

The interleave keyword has no effect when passed to envi.create_image. Currently, the only way to set the interleave is to pass it in the metadata argument. This bug was reported by Daniel Scheffler, who proposed the following fix:

... add these two lines below line 729 in the envi.py file (.../spectral/io/):

if 'interleave' in kwargs:  
    metadata['interleave'] = kwargs['interleave']
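
Until the fix is merged, a workaround is to set the interleave inside the metadata dict, which envi.create_image does honor. A minimal sketch with made-up dimensions (the commented-out call shows where the dict would be used):

```python
# Workaround sketch (assumes the fix above is not yet applied): put the
# interleave in the metadata dict rather than passing it as a keyword.
# File name and image dimensions below are made up.
md = {'lines': 100,
      'samples': 100,
      'bands': 5,
      'data type': 4,        # ENVI type code for 32-bit float
      'interleave': 'bil'}   # effective here, unlike the keyword argument
# import spectral.io.envi as envi
# img = envi.create_image('example.hdr', md, force=True)
```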

AttributeError: __enter__, __exit__

Running the following code

with spectral.io.envi.read_envi_header(filepath) as f:
    print(f)

raises an AttributeError whose message depends on the Python version. This seems to be a problem with the with-statement, as pointed out here.

  • Python 3.x error message:
    AttributeError: __enter__

  • Python 2.x error message:
    AttributeError: __exit__
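
The fix on the caller's side is simply to drop the with-statement, since read_envi_header returns a plain dict. A with block only works on objects implementing the context-manager protocol, as this minimal sketch shows (the commented lines use a placeholder filepath):

```python
# Correct usage (sketch; `filepath` is a placeholder):
# import spectral.io.envi as envi
# header = envi.read_envi_header(filepath)   # returns a plain dict
# print(header)

# A `with` block requires __enter__ and __exit__, which dicts lack:
class Managed:
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

with Managed() as m:
    pass   # only context managers like this can be used in a with-statement
```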

wrong assignment of bands.band_quantity in envi.open

The function /io/envi.open assigns:

img.bands.band_quantity = "Wavelength"

but this is not always correct.

I'm working with ENVI cubes that only provide geometric information for other data cubes and therefore have no band_quantity in the header. All the img.bands.* attributes come back empty, yet img.bands.band_quantity is "Wavelength" even though it is never defined.

I suggest adding this metadata1 to the output spectral.SpyFile only if it is actually present in the header, unless that would cause problems when writing a new file?

In any case, the blanket assumption for img.bands.band_quantity is, in my opinion, wrong.


1. Metadata list I found:

img.bands.band_quantity
img.bands.bandwidth_stdevs
img.bands.centers
img.bands.band_unit       
img.bands.bandwidths      
img.bands.centers_stdevs 

Error when computing mnf

I get an error when computing mnf.
I made test images in ENVI format by arranging two sets of 69 spectra as two images of 1 row x 69 columns x 50 bands. One image (testradVNIR) holds the target radiance and the other (testdarkVNIR) the dark noise. I have more bands, but I selected 50 to keep the test small and avoid having more bands than spectra.
Test images are available here
https://dl.dropboxusercontent.com/u/3180464/test.zip

This is what I do:

from spectral import *
data1 = envi.open('/media/alobo/LACIE500/Spectroradiometry/ADRIA/Tests/MNFdenoising/testradVNIR.hdr',
'/media/alobo/LACIE500/Spectroradiometry/ADRIA/Tests/MNFdenoising/testradVNIR.envi').load()
data2 = envi.open('/media/alobo/LACIE500/Spectroradiometry/ADRIA/Tests/MNFdenoising/testdarkVNIR.hdr',
'/media/alobo/LACIE500/Spectroradiometry/ADRIA/Tests/MNFdenoising/testdarkVNIR.envi').load()

signal = calc_stats(data1)
noise = calc_stats(data2)
mnfr = mnf(signal, noise)

But get the following error

mnfr = mnf(signal, noise)

LinAlgError Traceback (most recent call last)
in ()
----> 1 mnfr = mnf(signal, noise)

/usr/local/lib/python2.7/dist-packages/spectral/algorithms/algorithms.pyc in mnf(signal, noise)
1644 from spectral.algorithms.algorithms import PrincipalComponents, GaussianStats
1645 C = noise.sqrt_inv_cov.dot(signal.cov).dot(noise.sqrt_inv_cov)
-> 1646 (L, V) = np.linalg.eig(C)
1647 # numpy says eigenvalues may not be sorted so we'll sort them, if needed.
1648 if not np.alltrue(np.diff(L) <= 0):

/usr/lib/python2.7/dist-packages/numpy/linalg/linalg.pyc in eig(a)
1016 _assertRank2(a)
1017 _assertSquareness(a)
-> 1018 _assertFinite(a)
1019 a, t, result_t = _convertarray(a) # convert to double or cdouble type
1020 a = _to_native_byte_order(a)

/usr/lib/python2.7/dist-packages/numpy/linalg/linalg.pyc in _assertFinite(*arrays)
163 for a in arrays:
164 if not (isfinite(a).all()):
--> 165 raise LinAlgError("Array must not contain infs or NaNs")
166
167 def _assertNonEmpty(*arrays):

LinAlgError: Array must not contain infs or NaNs

I've made sure there are no infs or NaNs in my input data, so it looks like a problem in the eigenanalysis. What can I do?

NaN values in kmeans (1 cluster, 0 iterations)

Hi,
I'm trying to perform a kmeans classification on an 8-band ENVI file.
I can load my image in IPython with spectral, but when I try to perform the kmeans classification, it ends with this output:

In [26]: import spectral.io.envi as envi
In [27]: img = envi.open('spy.hdr','spy.envi').load()

In [28]: print img.info()
    # Rows:           2114
    # Samples:        2264
    # Bands:             8
    Data format:   float32

In [29]: (m, c) = kmeans(img, 5, 30)
Initializing clusters along diagonal of N-dimensional bounding box.
Iteration 1...  0.0%
kmeans terminated with 1 clusters after 0 iterations.
  • gdalinfo on this image gave me:
epinux@Debian-70-wheezy-64-minimal:~$ gdalinfo /var/www/shared/seabbass/spy.envi 
Driver: ENVI/ENVI .hdr Labelled
Files: /var/www/shared/seabbass/spy.envi
       /var/www/shared/seabbass/spy.hdr
Size is 2264, 2114
Coordinate System is:
PROJCS["NAD_1983_UTM_Zone_18N",
    GEOGCS["GCS_North_American_1983",
        DATUM["North_American_Datum_1983",
            SPHEROID["GRS_1980",6378137,298.257222101]],
        PRIMEM["Greenwich",0],
        UNIT["Degree",0.017453292519943295]],
    PROJECTION["Transverse_Mercator"],
    PARAMETER["latitude_of_origin",0],
    PARAMETER["central_meridian",-75],
    PARAMETER["scale_factor",0.9996],
    PARAMETER["false_easting",500000],
    PARAMETER["false_northing",0],
    UNIT["Meter",1]]
Origin = (506789.000000000000000,4204310.000000000000000)
Pixel Size = (0.500000000000000,-0.500000000000000)
Metadata:
  Band_1=Band 1
  Band_2=Band 2
  Band_3=Band 3
  Band_4=Band 4
  Band_5=Band 5
  Band_6=Band 6
  Band_7=Band 7
  Band_8=Band 8
Image Structure Metadata:
  INTERLEAVE=BAND
Corner Coordinates:
Upper Left  (  506789.000, 4204310.000) ( 74d55'21.68"W, 37d59'11.08"N)
Lower Left  (  506789.000, 4203253.000) ( 74d55'21.71"W, 37d58'36.78"N)
Upper Right (  507921.000, 4204310.000) ( 74d54'35.27"W, 37d59'11.04"N)
Lower Right (  507921.000, 4203253.000) ( 74d54'35.31"W, 37d58'36.75"N)
Center      (  507355.000, 4203781.500) ( 74d54'58.49"W, 37d58'53.91"N)
Band 1 Block=2264x1 Type=Float64, ColorInterp=Undefined
  Description = Band 1
Band 2 Block=2264x1 Type=Float64, ColorInterp=Undefined
  Description = Band 2
Band 3 Block=2264x1 Type=Float64, ColorInterp=Undefined
  Description = Band 3
Band 4 Block=2264x1 Type=Float64, ColorInterp=Undefined
  Description = Band 4
Band 5 Block=2264x1 Type=Float64, ColorInterp=Undefined
  Description = Band 5
Band 6 Block=2264x1 Type=Float64, ColorInterp=Undefined
  Description = Band 6
Band 7 Block=2264x1 Type=Float64, ColorInterp=Undefined
  Description = Band 7
Band 8 Block=2264x1 Type=Float64, ColorInterp=Undefined
  Description = Band 8

Am I doing something wrong?
Thanks for any help!

ENVI header parsing issues

ENVI headers currently fail if there is a line with an empty key, for instance,
wavelength units =

Also, comments (lines starting with a semicolon) are currently not ignored.

I'll submit a pull request for read_envi_header() in a few minutes.

Keyword "offset" not handled properly in envi.create_image

When the offset keyword is passed to envi.create_image, it adds an "offset" parameter to the ENVI header file instead of "header offset", which is the correct parameter name. The effect of this bug is that the image data are always written from the beginning of the file and the file is not padded at the beginning, as expected. This does not affect the reading of image data, since the "header offset" is not set to a nonzero value (and the spectral module then reads data from the beginning of the file). However, if one were to open the image data file (not the ENVI header file) and write within the expected header area, it would corrupt image data, since the image data starts at the beginning of the file.

Pass numpy-nDarray directly to spectralpython

In my workflow the data come from GRASS GIS, from which I export the raster layers as an ENVI multiband dataset that I then use as input to Spectral Python.
In GRASS the same layers are available as numpy ndarrays without any export, so how can I use Spectral Python to read a numpy ndarray as input instead of using the ENVI data format?
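
For what it's worth, SPy's algorithm functions (kmeans, calc_stats, principal_components, and so on) operate on plain numpy arrays shaped (rows, cols, bands), so no ENVI round-trip should be needed. A minimal sketch, with a random array standing in for the GRASS layer stack:

```python
import numpy as np

# Stand-in for the GRASS raster stack: (rows, cols, bands), float32.
data = np.random.rand(50, 40, 8).astype(np.float32)

# SPy functions accept this ndarray directly (commented out here so the
# sketch runs without spectral installed):
# from spectral import kmeans
# m, c = kmeans(data, 5, 30)
```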

Optional use_memmap argument to SpyFile read methods

Depending on an image file's band interleave, a memmap interface is not always faster than doing a direct read from the file. The various read_* methods of SpyFile subclasses should provide an optional use_memmap argument that can be set to False to avoid using the memmap. This argument should be defaulted to True or False based on performance for each read method of each of the SpyFile subclasses.

map_class_ids can produce incorrect mapping when source image has more classes than destination

If the source classification image contains more class labels than the target classification image, it is possible that the mapping will be incorrect when ground truth labels are ignored.

For example, suppose we have class maps A (source) and B (destination), where A contains more class labels than B. Suppose that class 15 from A is mapped to class 3 in B. After all of the overlaps between the two images have been exhausted, if class 3 from A has not been mapped (there are no remaining pixels of commonality), then it will retain its class value in the remapped image. The result is that the remapped image will have class values of 3 that correspond to both class 15 (correctly) and class 3 (incorrectly) from the source image.

Can't open .hdf ( binary file ) from ASTER 1T

First, I would like to thank you for this great code!

My problem is that I can't open an ASTER image (.hdf) with the open_image function. Am I missing something? Is the format not supported? Do I need to process the file beforehand?

Thanks

EOFError: read() didn't return enough bytes in read_band or read_bands method

Hi,

About half a year ago I wrote a short script that extracts bands from hyperspectral data stored as a .pix file with an ENVI-format .hdr file. I am sure it worked when I finished the script, but now the read_band (and read_bands) method fails. When I call it I get the error below and I don't know why. I tried several images, including older ones, and every image gives the same error.

I tried Python 2.7 and 3.5. The spectral version is 0.18 (installed via pip) and numpy is 1.11.0. All tests have passed.

I am totally desperate and will be glad for any help. I don't know whether it is a bug (perhaps something was updated) or my fault.

SPy part of my code:

img = envi.open(filename + '.hdr', filename)
reader = envi.read_envi_header(filename + '.hdr')
md = {'lines': lines,
          'samples': samples,
          'bands': len(listOfBands),
          'data type': 12}
img_save = envi.create_image('new_image.hdr', md, force=True, ext="dat", interleave=interleave)
mm = img_save.open_memmap(writable=True)

for band in listOfBands:   # list of integers
    #print(img[:, :, 1])
    #print(img.read_band(1, use_memmap=False))
    mm[:, :, j] = img.read_band(band-1, use_memmap=True)
    j += 1

Exception in Tkinter callback
Traceback (most recent call last):
  File "/usr/lib/python3.5/tkinter/__init__.py", line 1553, in __call__
    return self.func(*args)
  File "select3.py", line 69, in extractBands
    mm[:, :, j] = img.read_band(band-1, use_memmap=True)
  File "/usr/local/lib/python3.5/dist-packages/spectral/io/bipfile.py", line 111, in read_band
    vals.fromfile(f, sample_size)
EOFError: read() didn't return enough bytes


Too few colors on cube sides in view_cube

Only a few, discrete colors are displayed on the side of the cube now. This is due to commit d54d13f , which changed the default ColorScale to only use the given colors (i.e., no interpolation). This can be fixed by creating a new color scale in HypercubeWindow.load_textures that uses 256 colors, which is what the previous default color scale used to provide.

Support for python 3.x

SPy is currently developed for python 2.{6,7}. This enhancement will add support for python3. The current plan is to have a single code base that works for both python 2 and 3 (I realize some people discourage this).

The basic tasks to get this done are

  • Run 2to3 on the entire module.

  • Add needed future imports for python2:

    from __future__ import division, print_function, unicode_literals
    
  • Fix all the miscellaneous things that are broken (most likely due to [non]integer division).

  • Verify all unit tests pass under python 2 and 3.

  • Figure out a way to get view_cube and view_nd working with python3.

The last item above is probably the hardest part because those functions rely on wxPython, which is not ported to python3. I've confirmed that most of the needed matplotlib functions that are used for interactive GUI capabilities in SPy will work under alternate matplotlib backends (with python3) but the two functions mentioned will need to be migrated to another GUI toolkit that provides an OpenGL wrapper (unless wxPython gets ported). If there isn't a relatively easy way to port those two functions, it is probably still worth making the rest of the module work with python3 and deferring those two functions until later.

envi.save_image() fails when writing a single band numpy array imported by SPy_obj.read_band()

Version 0.16.1:
I imported a single band like this:

import spectral
Obj = spectral.open_image('image.hdr')
Band1 = Obj.read_band(0)

When I try to save the imported 2D numpy array "Band1" using envi.save_image with interleave='bsq', I get the following error:

File "...\Python27\lib\site-packages\spectral\io\envi.py", line 537, in _prepared_data_and_metadata
  data = data.transpose(interleave_transpose(src_interleave, interleave))

ValueError: axes don't match array

The problem is that read_band() returns a 2D, not a 3D numpy array, which cannot be transposed with the argument (2,0,1), which is returned by spectral.io.spyfile.interleave_transpose('bip','bsq').
In my case I try to do this:

data = np_array_of_shape_10_10.transpose(2,0,1)
ValueError: axes don't match array

Solution:
The 2D numpy array returned by Obj.read_band(0) has to be converted into a 3D array like this:

Band1 = Obj.read_band(0)
Band1 = Band1[:, :, numpy.newaxis]

Afterwards it can be saved by envi.save_image.
@tboggs
Would it be possible to modify the save_image function to be compatible with 2D numpy arrays?
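
The shape promotion in the workaround can be verified with numpy alone; this sketch mimics the 2D array read_band returns and the bip-to-bsq transpose that save_image applies:

```python
import numpy as np

band = np.zeros((10, 10), dtype=np.float32)  # what read_band returns: 2D
cube = band[:, :, np.newaxis]                # promote to a 1-band 3D cube
bsq = cube.transpose(2, 0, 1)                # the (2, 0, 1) transpose now works
```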

ImportError: No module named oldnumeric

I use an Anaconda install and after upgrading to numpy 1.9.0 my install of spectral 1.5.0 throws an error on import. See below. Also this note would explain it :) http://docs.scipy.org/doc/numpy-dev/reference/routines.oldnumeric.html

File "C:/python/calculate.py", line 15, in
import spectral

File "C:\Anaconda\lib\site-packages\spectral\__init__.py", line 48, in
from graphics import *

File "C:\Anaconda\lib\site-packages\spectral\graphics\__init__.py", line 35, in
from colorscale import ColorScale

File "C:\Anaconda\lib\site-packages\spectral\graphics\colorscale.py", line 139, in
default_color_scale = create_default_color_scale()

File "C:\Anaconda\lib\site-packages\spectral\graphics\colorscale.py", line 128, in create_default_color_scale
from numpy.oldnumeric import array

ImportError: No module named oldnumeric

Display of pixel row/col by clicking in ND-window not functioning properly

When clicking on a pixel with CTRL+SHIFT in the ND-window display (after calling view_nd), the following exception is raised:

Traceback (most recent call last):
  File "/home/thomas/src/spectral/spectral/graphics/ndwindow.py", line 93, in left_down
    self.window.canvas.SetCurrent(self.canvas.context)
AttributeError: MouseHandler instance has no attribute 'canvas'

This is due to not accessing the window canvas properly (it should be self.window.canvas).

Furthermore, the pixel coordinate returned incorrectly has a floating point row value (should be int):

Pixel 10264 (70.786206896551718, 114) has class 0.

This was likely introduced when the file began using true division (by importing division from __future__).

Support for AVIRIS-NG Data

Hi folks, we are currently working with AVIRIS-NG data and would really like to use spectral if possible. Right now, when I load the data I am having the issues outlined below. You can access the L1B and L2 data products here: ftp://avng.jpl.nasa.gov/AVNG_2015_data_distribution/.
Can someone else confirm that they cannot load AVIRIS-NG products?

>>> import spectral as spy
>>> image = spy.open_image('ang20150422t163638_rdn_v1e_img')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/lmcgibbn/miniconda3/lib/python3.5/site-packages/spectral/spectral.py", line 490, in open_image
    raise IOError('Unable to determine file type or type not supported.')
OSError: Unable to determine file type or type not supported.
>>> exit()

If this is the case then we would like to write the functionality to work with AVIRIS-NG images and we will submit a pull request for this.

view_cube fails in HypercubeWindow.load_textures

The error occurs when calling make_pil_image because an unused "format" keyword is given. The keyword is passed on to get_rgb, which previously ignored unused keywords but now checks keyword arguments and raises an exception when an unexpected keyword is given.

spylab.py throws AttributeError: 'dict' object has no attribute 'has_key'

Using the Python 3 branch and running the following code, I get the error below.

from spectral import *
img = spectral.open_image('avng.jpl.nasa.gov/AVNG_2015_data_distribution/L2/ang20150422t163638_rfl_v1e/ang20150422t163638_corr_v1e_img.hdr')
print(img)
# Data Source:   '././ang20150422t163638_corr_v1e_img'
#   # Rows:           4031
#   # Samples:         899
#   # Bands:           432
#   Interleave:        BIL
#   Quantization:  32 bits
#   Data format:   float32
view = imshow(img, (29, 19, 9))
print(view)
...
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-8-680f59826a83> in <module>()
----> 1 print(view)

/Users/lmcgibbn/miniconda3/lib/python3.5/site-packages/spectral-0.18-py3.5.egg/spectral/graphics/spypylab.py in __str__(self)
   1127             interp = self.interpolation
   1128         s += '  {0:<20}:  {1}\n'.format("Interpolation", interp)
-> 1129         if meta.has_key('rgb range'):
   1130             s += '  {0:<20}:\n'.format("RGB data limits")
   1131             for (c, r) in zip('RGB', meta['rgb range']):

AttributeError: 'dict' object has no attribute 'has_key'

This is due to has_key being removed in Python 3; see https://docs.python.org/3.1/whatsnew/3.0.html#builtins
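
A minimal illustration of the fix, using a stand-in metadata dict: the `in` operator replaces has_key and works under both Python 2 and 3:

```python
# dict.has_key was removed in Python 3; the replacement is the `in` operator.
meta = {'rgb range': [(0.0, 1.0)] * 3}   # stand-in for the view metadata

# Old (Python 2 only): meta.has_key('rgb range')
has_range = 'rgb range' in meta          # works in Python 2 and 3
```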

Problem in the MSAM

I want to compute the MSAM as shown below.

from spectral import *
import gdal
d=ev.load_ENVI_spec_lib("D:\data\speclib.hdr")
img=gdal.Open("D:\data\sub_66")
M=img.ReadAsArray()
M=np.swapaxes(M,0,2)
n1=d[0]
n2=d[1]
n1=np.transpose(n1)
n1=n1[:425]
wavelength=n2['wavelength']
wavelength=np.float64(wavelength)
class1=msam(M,n1)

But it gives an error:

AssertionError: Matrix dimensions are not aligned.

where M has shape (210 rows, 275 columns, 425 bands) and n1 has shape (425 band reflectances, 16 spectra).

Bug reading ENVI SLI files using spectral.io.envi.open()

I'm using Python 2.7.6 and discovered what I think is a bug reading ENVI SLI (spectral library) files. Every time I try to read SLI files it bombs on me, I think because envi.open() mistakenly accesses the .sli (data) file when it tries to read header data, rather than the .hdr (header) file.

A single-line change (shown below) at the beginning of the spectral.io.envi.open() function should be sufficient to fix this problem.

import os
from .spyfile import find_file_path
import numpy
import spectral

headerPath = find_file_path(file)                # this line should be removed
headerPath = os.path.splitext(file)[0] + '.hdr'  # and replaced with this one
h = read_envi_header(headerPath)

Incorrect handling of ext keyword in envi.save_image.

An error is raised if ext='' is passed to the envi.save_image function. Furthermore, using an alternate image file extension resulted in the name separator (.) being placed after the file extension instead of before it.

format keyword causes error in save_rgb

This error is due to the fact that keywords are passed on to get_rgb, which now checks for allowable keywords (and "format" is not one of them). Need to remove "format" from the keywords dictionary prior to calling get_rgb.

Incorrect handling of unsigned byte data type in ENVI files

ENVI only supports an unsigned 8-bit data type, so in the dtype_map variable in envi.py, ('1', np.int8) should actually be ('1', np.uint8). We also need to figure out how to map data types in the other direction (i.e., how envi.save_image should handle an np.int data type, which isn't supported by the ENVI format).

ENVI data types

I believe ENVI data type 1 ('byte') should be of type np.uint8 instead of np.int8.

It's not the end of the world but it's very easy to fix.
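
A small numpy demonstration of why the mapping matters: the same raw bytes decode differently under the two types, so values above 127 are silently corrupted when read as signed:

```python
import numpy as np

raw = bytes([0, 127, 128, 255])               # raw ENVI 'byte' samples
as_signed = np.frombuffer(raw, dtype=np.int8)    # wrong mapping
as_unsigned = np.frombuffer(raw, dtype=np.uint8)  # correct mapping
# 0xFF decodes to -1 as int8 but 255 as uint8.
```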

memmap issue

Hello,

I am new to Spectral Python but really psyched about using it. I am having a problem with the open_memmap function. I ran python -m spectral.tests.run and got the following error. I am using the newest version of numpy (1.9.2).

Any help would be great!

------------------------------------------------------------------------
Running memmap tests.
------------------------------------------------------------------------
Testing memmaps with BIL image file.
Saving /media/sf_C_DRIVE/Courses/remote_sensing/project/testng_SPy/spectral_test_files/memmap_test_bil.img
Testing bil_memmap_read..................................... OK
Testing bil_memmap_write.................................... Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/local/lib/python2.7/dist-packages/spectral/tests/run.py", line 68, in <module>
    test.run()
  File "/usr/local/lib/python2.7/dist-packages/spectral/tests/memmap.py", line 192, in run
    suite.run()
  File "/usr/local/lib/python2.7/dist-packages/spectral/tests/memmap.py", line 186, in run
    test.run()
  File "/usr/local/lib/python2.7/dist-packages/spectral/tests/spytest.py", line 76, in run
    method()
  File "/usr/local/lib/python2.7/dist-packages/spectral/tests/memmap.py", line 133, in test_bil_memmap_write
    mm[i, k, j] = 3 * self.value
TypeError: 'NoneType' object does not support item assignment

SpyFile indexing result doesn't exactly match numpy indexing

When indexing a numpy array, if a single index is specified along an axis (instead of a slice), numpy removes the length-1 entries from the shape of the result. The __getitem__ function in SpyFile and some of the interleave-specific routines (read_subregion, read_subimage) do not currently follow this convention.

Here are some examples that show the inconsistencies.

In [141]: spy.shape # spy is a SpyFile object (a BsqFile)
Out[141]: (640, 160, 6)

In [142]: arr = spy[:,:,:]; arr.shape # arr is a numpy array with the SpyFile data
Out[142]: (640, 160, 6)

In [143]: spy[:, :, 0].shape
Out[143]: (640, 160, 1)

In [144]: arr[:, :, 0].shape
Out[144]: (640, 160)

In [145]: arr[:, :, 0:1].shape                                                        
Out[145]: (640, 160, 1)

In [146]: spy.read_band(0).shape
Out[146]: (640, 160)
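
For reference, the numpy convention the SpyFile methods should follow can be shown with a plain array of the same shape:

```python
import numpy as np

arr = np.zeros((640, 160, 6))
single = arr[:, :, 0]    # integer index: numpy drops the length-1 axis
kept = arr[:, :, 0:1]    # length-1 slice: the axis is preserved
```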

GaussianClassifier fails with TransformedImage object

When a GaussianClassifier is created and its classify_image method is called on a TransformedImage object, an AttributeError is raised because the object does not have a reshape method (classify_image expects a numpy.ndarray). The exception can be reproduced as follows:

>>> from spectral import *
>>> img = open_image('92AV3C.lan')
>>> gt = open_image('92AV3GT.GIS').read_band(0)
>>> data = img.load()
>>> pc = principal_components(data)
>>> pc_0999 = pc.reduce(fraction=0.999)
>>> img_pc = pc_0999.transform(img)
>>> classes = create_training_classes(img_pc, gt)
>>> gmlc = GaussianClassifier(classes)
>>> clmap = gmlc.classify_image(img_pc)
AttributeError                            Traceback (most recent call last)
<ipython-input-11-6488a72f9e73> in <module>()
----> 1 clmap = gmlc.classify_image(img_pc)
/home/thomas/src/spectral/spectral/algorithms/classifiers.pyc in classify_image(self, image)
    200         status.display_percentage('Processing...')
    201         shape = image.shape
--> 202         image = image.reshape(-1, shape[-1])
    203         scores = np.empty((image.shape[0], len(self.classes)), np.float64)
    204         delta = np.empty_like(image, dtype=np.float64)

AttributeError: 'TransformedImage' object has no attribute 'reshape'

This will also occur with the MahalanobisDistanceClassifier. The bug can be avoided by using the load method of the TransformedImage object (img_pc in the example above) to load the data into an ndarray (if sufficient memory is available). This bug will be fixed by using the parent class method Classifier.classify_image when the argument is not an ndarray.

view_cube import error

I have installed wxPython on my Win10 x64 machine correctly, but when I use spectral.view_cube(img, bands=[0, 1, 2]) I get:

Traceback (most recent call last):
File "N:\new.py", line 7, in
spectral.view_cube(img, bands=[0, 1, 2])
File "C:\WinPython-32bit-2.7.10.2\python-2.7.10\lib\site-packages\spectral\graphics\graphics.py", line 178, in view_cube
from spectral.graphics.hypercube import HypercubeWindow
File "C:\WinPython-32bit-2.7.10.2\python-2.7.10\lib\site-packages\spectral\graphics\hypercube.py", line 73, in
raise ImportError("Required dependency wx.glcanvas not present")
ImportError: Required dependency wx.glcanvas not present

Missing numpy module name

For version 0.16.0:

In algorithms.py, line 97, the 'numpy' module name is missing from the call to not_equal:

File "F:\conda\lib\site-packages\spectral\algorithms\algorithms.py", line 97, in __init__
self.mask = not_equal(mask, 0)

NameError: global name 'not_equal' is not defined

How to use a spectral library to train a classifier

Hello, I am currently working on a project where I want to classify AVIRIS images using a spectral library (USGS right now and later ASTER). Opening the USGS .hdr file returns a SpectralLibrary object, but I cannot use this to create training classes using the "create_training_classes" function as there is no "shape" attribute for the SpectralLibrary class. Is there a way to make this work? Thank you for your time.

case sensitivity of keys in ENVI header files

Although it appears to be undocumented, ENVI treats keys in header files as case insensitive. Some data creators assume this behavior, and their .hdr files look like this

ENVI 
SAMPLES = 160
LINES = 640
...

instead of this

ENVI 
samples = 160
lines = 640
...

This causes problems for mandatory keywords, checked in spectral/io/envi.py around line 737

    # Verify minimal set of parameters have been provided
    if 'lines' not in metadata:
        raise Exception('Number of image rows is not defined.')
    ...

What do you think is the best way of dealing with this? Some possibilities:

  1. Apply .lower() to all keywords in ENVI headers. (easy: envi.py line 115)
  2. Apply .lower() only to mandatory keywords in ENVI headers (lines, samples, bands, etc.) (personally not a fan of this option).
  3. Optional case_insensitive keyword argument. Probably has to be added to read_envi_header(), open() and gen_params() in envi.py, and possibly to open_image() in spectral.py.

Or maybe something better that I'm not thinking of. Let me know what you think and if you'd like me to submit a pull request.
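
Option 1 could be sketched as a one-line normalization applied to the parsed header dict (normalize_keys is a hypothetical helper, and the header values below are made up):

```python
def normalize_keys(metadata):
    """Hypothetical helper for option 1: lower-case every header key."""
    return {k.lower(): v for k, v in metadata.items()}

# Made-up header parsed from an upper-case .hdr file:
hdr = {'SAMPLES': '160', 'LINES': '640', 'Bands': '6'}
norm = normalize_keys(hdr)
# The mandatory-keyword checks ('lines' in metadata, etc.) now succeed.
```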

Importing spectral breaks matplotlib's plotting abilities

This functions as expected, popping up a graphical plot:

import numpy as np
import matplotlib.pyplot as plt

A = np.array([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
])
plt.imshow(A, interpolation='nearest')
plt.show()

But this suppresses the graphical plot: it never shows up at all.

import spectral as spc      # <-- Only change is the addition of this line
import numpy as np
import matplotlib.pyplot as plt

A = np.array([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
])
plt.imshow(A, interpolation='nearest')
plt.show()

The suppressing behavior is consistent regardless of the import order.

Expected behavior is that importing spectral should have no bearing on the proper functioning of matplotlib's plots.

Working environment is a Mac (El Capitan), invoking from the command line as python example.py from iTerm2.

Add option for forcing lower case in key names when reading in ENVI header

First of all, big thanks for providing an excellent spectral python module!

In our use we acquire images with some custom metadata and then store them in ENVI BSQ format. The metadata contains many fields that are non-standard for the ENVI format, such as exposure and synchronization timing details. We write the files using the spectral.envi.save_image() method and later, in the post-processing phase, read them back in using the spectral.envi.open() and spectral.envi.read_envi_header() methods. Everything gets written perfectly fine, but when reading the header data back in, the envi.read_envi_header() method forces all the key names to lower case. I understand why this has been implemented, but in our use case it causes issues, as the field names in MetaDataDictIn no longer perfectly match those originally in MetaDataDictOut.

It would be easy to give the user an option to control whether lower case is applied. Currently the function contains these lines:

def read_envi_header(file):
...
            key = key.strip().lower()
...

I propose replacing these lines with:

def read_envi_header(file, lowercasekeys=True):
...
            key = key.strip()
            if lowercasekeys:
                key = key.lower()
...

...then the user would have the option to disable lower-casing if needed, and it would have no effect for users who are happy with the lower-case formatting.
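For comparison, here is a self-contained sketch of the proposed behavior on simple "key = value" lines. parse_envi_pairs is a hypothetical stand-in, not the real function; the actual read_envi_header also handles braces, multiline values, and so on:

```python
def parse_envi_pairs(lines, lowercase_keys=True):
    """Parse simple ENVI-style 'key = value' lines (a minimal sketch)."""
    header = {}
    for line in lines:
        if "=" not in line:
            continue
        key, _, value = line.partition("=")
        key = key.strip()
        if lowercase_keys:
            key = key.lower()   # current behavior, now opt-out
        header[key] = value.strip()
    return header

hdr = ["samples = 640", "ExposureTime = 12.5"]
print(parse_envi_pairs(hdr))                        # keys forced lower case
print(parse_envi_pairs(hdr, lowercase_keys=False))  # original casing kept
```

With lowercase_keys=False, custom fields like ExposureTime round-trip with their original casing intact.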

'classifiers.pyc' line 202 error = "object has no attribute 'reshape' "

image.reshape in python 2.7 error = "object has no attribute 'reshape' "

Hello Everyone,

Great library BTW. Thanks to Thomas Boggs!

I convert my spectral data from MATLAB using enviwrite:

info = enviinfo(myXYL_M);
enviwrite(myXYL_M,info,'my_hypercube.dat');

I read it into Python 2.7 (Enthought Canopy, 64-bit) with

import numpy as np
from spectral import *
import spectral.io.envi as envi

# # tried with this to solve issue, didn't work
# import matplotlib
# from numpy import *
# from pylab import *
# from PIL import Image 

# IMPORT HYPERCUBE
img = envi.open('my_hypercube.dat.hdr')

# load classes from file
myclasses = np.loadtxt('myclasses.txt', delimiter=",") #seeding classes


# SHOW CLASSES IN OVERLAY WITH IMG
# view = imshow(img, (1400, 700, 450), classes=myclasses) #nice for 1920 WLs
view = imshow(img, (24, 12, 7), classes=myclasses)  #nice for 33 WLs
view.set_display_mode('overlay')
view.class_alpha = 0.5

This is my issue:

Whenever I try to apply a classifier, I run into this error:

clmap = gmlc.classify_image(img)
Processing...  0.0%---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-19-52f9d70ae993> in <module>()
----> 1 clmap = gmlc.classify_image(img)

/python2.7/site-packages/spectral/algorithms/classifiers.pyc in classify_image(self, image)
    200         status.display_percentage('Processing...')
    201         shape = image.shape
--> 202         image = image.reshape(-1, shape[-1])
    203         scores = np.empty((image.shape[0], len(self.classes)), np.float64)
    204         delta = np.empty_like(image, dtype=np.float64)

AttributeError: 'BsqFile' object has no attribute 'reshape' 

Again, after trying Fisher Linear Discriminant:

fld = linear_discriminant(classes)
len(fld.eigenvectors)
Out[22]: 33

img_fld = fld.transform(img)

v = imshow(img_fld[:, :, :3])

classes.transform(fld.transform)

gmlc = GaussianClassifier(classes)
Setting min samples to 4

clmap = gmlc.classify_image(img_fld)
Processing...  0.0%---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-27-77f389619efe> in <module>()
----> 1 clmap = gmlc.classify_image(img_fld)

python2.7/site-packages/spectral/algorithms/classifiers.pyc in classify_image(self, image)
    200         status.display_percentage('Processing...')
    201         shape = image.shape
--> 202         image = image.reshape(-1, shape[-1])
    203         scores = np.empty((image.shape[0], len(self.classes)), np.float64)
    204         delta = np.empty_like(image, dtype=np.float64)

AttributeError: 'TransformedImage' object has no attribute 'reshape' 
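Both tracebacks fail for the same reason: classify_image flattens the cube to (rows*cols, bands) via reshape, which file-backed objects such as BsqFile (and lazy TransformedImage wrappers) do not implement. Loading the data into an in-memory array first (e.g. with the image object's load() method) avoids this. A plain NumPy sketch of the flattening step, using a zero cube of assumed shape (210, 370, 33) to stand in for the loaded data:

```python
import numpy as np

# data = img.load() would materialize the cube in SPy; here a plain
# NumPy array stands in for it.
cube = np.zeros((210, 370, 33))
flat = cube.reshape(-1, cube.shape[-1])   # what classify_image does internally
print(flat.shape)                          # one row per pixel, one column per band
```

Since NumPy arrays implement reshape, passing the loaded array to classify_image sidesteps the AttributeError.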

ImageView.__str__ fails when viewing class labels without image data

ImageView.__str__ prints the bands element of the RGB metadata; however, this metadata element is not present if class labels are being viewed without an associated image data array. For example:

>>> v = spy.imshow(classes=gt)
>>> v
/home/thomas/src/spectral/spectral/graphics/spypylab.pyc in __str__(self)
   1117         meta = self.data_rgb_meta
   1118         s = 'ImageView object:\n'
-> 1119         s += '  {0:<20}:  {1}\n'.format("Display bands", meta['bands'])
   1120         if self.interpolation == None:
   1121             interp = "<default>"

KeyError: u'bands'
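A minimal sketch of the kind of guard that would avoid this, using dict.get with a fallback rather than indexing (the names here are illustrative, not the actual SPy fix):

```python
meta = {}   # rgb metadata dict; 'bands' is absent when only classes are shown
bands = meta.get("bands", "<none>")          # dict.get avoids the KeyError
line = "  {0:<20}:  {1}\n".format("Display bands", bands)
print(line.rstrip())
```

The same format string then works whether or not image data was supplied alongside the class labels.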

mnf documentation examples use invalid call signature

In the following lines of the spectral.mnf doc string, data should be added as the first argument:

>>> denoised = mnfr.denoise(snr=10)

>>> # Reduce dimensionality, retaining NAPC components where SNR >= 10.
>>> reduced = mnfr.reduce(snr=10)

>>> # Reduce dimensionality, retaining top 50 NAPC components.
>>> reduced = mnfr.reduce(num=50)

Problem in classification.

from spectral import *
import numpy as np
# M is the image dataset (ndarray HSI cube of shape (210, 370, 425))
# n2 is the spectral library (16 spectra) of shape (16, 425), i.e. 425 bands
cla = spectral_angles(M, n2)
clmap = np.argmin(cla, 2)
classes = clmap + 1
v = imshow(classes=clmap + 1)

But it does not show any unclassified regions; it classifies the entire image (misclassification). It produces the following result. Also, I don't know how to set a threshold.
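One common way to leave weak matches unclassified is to threshold the minimum spectral angle, assigning class 0 to pixels whose best match is still too far away. A small NumPy sketch with a made-up angle map and an assumed threshold:

```python
import numpy as np

# Hypothetical angle map of shape (rows, cols, n_spectra), in radians.
angles = np.array([[[0.05, 0.30],
                    [0.40, 0.35]]])          # shape (1, 2, 2)
best = angles.min(axis=2)                     # smallest angle per pixel
clmap = np.argmin(angles, axis=2) + 1         # classes numbered 1..N
threshold = 0.2                               # assumed value, in radians
clmap[best > threshold] = 0                   # 0 = unclassified
print(clmap)                                  # [[1 0]]
```

The threshold value is data-dependent; inspecting the distribution of best-match angles helps pick one.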

Problem in opening HSI file

I am trying to open an HSI file, but it is not working. It keeps saying, "Unable to determine file type".
I am using the command as always: x1 = open_image(filename). I am attaching the file here. This is actually an HSI file, but because I cannot attach an HSI file here, I just changed the extension to .png.

(attached file: image006)

install error

I get the following error when installing with pip on Manjaro Linux (Arch-based).

Downloading/unpacking spectral
  Downloading spectral-0.15.0.tar.gz (145kB): 145kB downloaded
  Running setup.py (path:/tmp/pip_build_root/spectral/setup.py) egg_info for package spectral
    Traceback (most recent call last):
      File "<string>", line 17, in <module>
      File "/tmp/pip_build_root/spectral/setup.py", line 8, in <module>
        import spectral
      File "/tmp/pip_build_root/spectral/spectral/__init__.py", line 44, in <module>
        from spectral import (open_image, load_training_sets, save_training_sets,
    ImportError: cannot import name 'open_image'
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 17, in <module>
      File "/tmp/pip_build_root/spectral/setup.py", line 8, in <module>
        import spectral
      File "/tmp/pip_build_root/spectral/spectral/__init__.py", line 44, in <module>
        from spectral import (open_image, load_training_sets, save_training_sets,
    ImportError: cannot import name 'open_image'
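This ImportError pattern is characteristic of a Python 2 style implicit relative import (spectral/__init__.py importing from its own spectral.py submodule) running under Python 3's absolute-import rules, where the name resolves to the half-initialized package itself. A self-contained reproduction with hypothetical names, followed by the explicit relative import that fixes it:

```python
import importlib
import os
import sys
import tempfile

# Build a tiny package mimicking the layout: a package whose __init__.py
# imports from a same-named submodule using an implicit relative import.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "demo_pkg")
os.makedirs(pkg)
with open(os.path.join(pkg, "demo_pkg.py"), "w") as f:
    f.write("def open_image():\n    return 'opened'\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from demo_pkg import open_image\n")  # implicit relative import

sys.path.insert(0, root)
try:
    import demo_pkg        # Python 3 resolves 'demo_pkg' absolutely, hitting
    failed = False         # the half-initialized package, not the submodule
except ImportError:
    failed = True
print("implicit relative import failed:", failed)

# The Python 3 compatible fix: an explicit relative import.
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .demo_pkg import open_image\n")
sys.modules.pop("demo_pkg", None)
importlib.invalidate_caches()
import demo_pkg
print(demo_pkg.open_image())
```

Later SPy releases added Python 3 support; upgrading (or installing under Python 2 for this old version) avoids the error.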
