

SarPy

SarPy is a basic Python library to read, write, and do simple processing of complex SAR data using the NGA SICD format (standards linked below). It has been released by NGA to encourage the use of SAR data standards throughout the international SAR community. SarPy complements the SIX library (C++) and the MATLAB SAR Toolbox, which are implemented in other languages but have similar goals.

Some sample SICD files can be found here.

Relevant Standards Documents

A variety of SAR format standards are mentioned throughout this README; the associated references are listed below.

Sensor Independent Complex Data (SICD) - latest version (1.3.0; 2021-11-30)

  1. Volume 1, Design & Implementation Description Document
  2. Volume 2, File Format Description Document
  3. Volume 3, Image Projections Description Document
  4. Schema

Sensor Independent Derived Data (SIDD) - latest version (3.0; 2021-11-30)

  1. Volume 1, Design and Implementation Description Document
  2. Volume 2, NITF File Format Description Document
  3. Volume 3, GeoTIFF File Format Description Document
  4. Schema

Compensated Phase History Data (CPHD) - latest version (1.1.0; 2021-11-30)

  1. Design & Implementation Description
  2. Design & Implementation Schema

Both SICD and SIDD files are NITF files following specific guidelines.

Basic Image Interchange Format (BIFF) - latest edition (2021.2; 2021-04-20)

  1. National Imagery Transmission Format

For other NGA standards inquiries, the standards registry can be searched here.

Basic Capability

The basic capabilities provided in SarPy are generally SAR specific, and largely geared towards reading and manipulating data provided in NGA SAR file formats. Full support for reading and writing SICD, SIDD, CPHD, and CRSD (standard pending) and their associated metadata structures is currently provided, and this is the main focus of the project.

There is additionally support for reading data from complex data formats analogous to the SICD format, usually called Single Look Complex (SLC) or Level 1, from a variety of commercial and other sources, including:

  • Capella (partial support)
  • COSMO-SkyMed (1st and 2nd generation)
  • GFF (Sandia format)
  • ICEYE
  • NISAR
  • PALSAR2
  • RadarSat-2
  • Radar Constellation Mission (RCM)
  • Sentinel-1
  • TerraSAR-X

Data in these SLC formats is read directly as though it were coming from a SICD file. This ability does not generally extend to data products other than the SLC or Level 1 product, for which there is typically no direct NGA standard analog.

Some general TIFF and NITF reading support is provided, but this is not the main goal of the SarPy library.

Documentation

Documentation for the project is available at readthedocs.

If this documentation is inaccessible, it can be built locally after checking out this repository via the command python setup.py build_sphinx, which depends on the python package sphinx.

Origins

SarPy was developed at the National Geospatial-Intelligence Agency (NGA). The software use, modification, and distribution rights are stipulated within the MIT license.

Dependencies

The core library functionality depends only on numpy >= 1.11.0 and scipy.

Optional Dependencies and Behavior

A small collection of optional dependencies supports functionality that is not a core requirement for most sarpy tasks. There is a tension between keeping the list of required dependencies as short as possible and avoiding unstated dependencies that cause surprise failures, and there are viable arguments for making any or all of these formally stated dependencies. The choices made here are guided by practical realities rather than what is generally considered best practice.

For all packages on this list, the import is attempted (where relevant), and any import errors for these optional dependencies are caught and handled. In other words, a missing optional dependency will not be apparent at import time. Except for the functionality requiring h5py, this import error handling is generally silent.

Every module in sarpy can be successfully imported, provided that numpy and scipy are in the environment. Attempts at using functionality depending on a missing optional dependency will generate an error at run time with accompanying message indicating the missing optional dependency.

  • Support for reading single look complex data from certain sources which provide data in hdf5 format requires the h5py package; this includes COSMO-SkyMed, ICEYE, and NISAR data.

  • Reading an image segment in a NITF file using jpeg or jpeg 2000 compression and/or writing a kmz image overlay requires the pillow package.

  • CPHD consistency checks, presented in the sarpy.consistency module, depend on lxml>=4.1.1, networkx>=2.5, shapely>=1.6.4, and pytest>=3.3.2. Note that these are the versions tested for compliance.

  • Some less commonly used (in the sarpy realm) NITF functionality requires the use and interpretation of UTM coordinates, and this requires the pyproj package.

  • Building sphinx documentation (mentioned above) requires the sphinx and sphinx_gallery packages.

  • Optional portions of running unit tests (unlikely to be of relevance to anyone not performing development on the core sarpy package itself) require the lxml package.

Installation

From PyPI, install using pip (may require escalated privileges e.g. sudo):

pip install sarpy

Note that here pip represents the pip utility for the desired Python environment.

For verbose instructions for installing from source, see here. It is still recommended that the package be built locally and installed using pip, which provides a proper package update mechanism, while python setup.py install does not.

Issues and Bugs

Support for Python 2 has been dropped. The core sarpy functionality has been tested for Python 3.6, 3.7, 3.8, 3.9, 3.10, and 3.11.

Changes to sarpy for the sole purpose of supporting a Python version beyond end-of-life are unlikely to be considered.

Information regarding any discovered bugs would be greatly appreciated, so please feel free to create a GitHub issue. If more appropriate, contact [email protected].

Integration Branches

Integration branches (branches prefixed with integration/) are used to stage content under consideration for inclusion in the master branch and future SarPy releases. These branches can be used to access features and bug fixes that have not been fully released.

Pull Requests

Efforts at direct contribution to the project are certainly welcome, and please feel free to make a pull request. Pull requests should be authored against the master branch but may be retargeted to a suitable integration branch upon review. Note that any and all contributions to this project will be released under the MIT license.

Software source code previously released under an open source license and then modified by NGA staff is considered a "joint work" (see 17 USC 101); it is partially copyrighted, partially public domain, and as a whole is protected by the copyrights of the non-government authors and must be released according to the terms of the original open source license.

Associated GUI Capabilities

Some associated SAR specific graphical user interface tools are maintained in the sarpy_apps project.

sarpy's People

Contributors

ayoungs, bombaci-vsc, ckras34, darder, jcasey-beamio, johnstob, khavernathy, kjurka, mattmolinare, mnpark3y, mstewart-vsc, mtrachy, orbitaljarred, patrickcutlertdy, pjc0308, pressler-vsc, rakibfiha, thomasmccullough, utwade


sarpy's Issues

Update tests with URLs of missing test data

Right now, just cloning the repo and running 'setup.py test' fails all over the place. The error messages make clear where the test data is expected to be found, but not where a user can download it from in order to put it there.

The link in the README about the sample ntf is helpful, but not complete. I also had to hunt on my own to find required test data at

and I still can't find data for the Sentinel and RS2 tests.

It would be good if each test failure message would say like "The data needed for this test can be downloaded from http://..."

Even better, how about if there were a 'setup.py download' that automatically downloaded and organized all the required test data? (with a confirmation like 'This will download X gigabytes of test data into ...\Desktop\sarpy_testing, is that OK? [Y/n]' )

SICDReader to_xml_string() changing SICD element to SICDType

I need to extract the SICD XML string from a SICD file. I will also be using the pixels, so want to use the SICDReader to do it all. I assume I am using it correctly - I am effectively doing:
sicdXml = SICDReader("testImage.ntf").sicd_meta.to_xml_string().
The resulting string gave me an error in my proprietary code. Before attempting to use sarpy, I was using a proprietary sicd string reader. Upon examination, I noticed that the top-level tag name had been changed from "SICD" to "SICDType" in the string returned by sarpy. Is this intentional? It would be convenient if to_xml_string() returned the XML string as it appears in the file.
Making a cursory glance at the rest of the returned string, I do not see any other element name substitutions.

h5py dependency not listed in setup.py

After installing sarpy into a bare virtualenv, trying to open a SICD file using sarpy.io.complex.open(fname) fails with:

reader = sarpy.io.complex.open(input_file)
  File "xxx/venv/lib/python3.6/site-packages/sarpy/io/complex/__init__.py", line 70, in open
    modules = [sys.modules[names] for names in module_names if __import__(names)]
  File "xxx/venv/lib/python3.6/site-packages/sarpy/io/complex/__init__.py", line 70, in <listcomp>
    modules = [sys.modules[names] for names in module_names if __import__(names)]
  File "xxx/venv/lib/python3.6/site-packages/sarpy/io/complex/csm.py", line 20, in <module>
    import h5py
ModuleNotFoundError: No module named 'h5py'

So csm wants h5py and even though I'm trying to read a SICD, this is tripping me up. Reasonable solutions are:

  1. Just list h5py as a required dependency.
  2. try/catch around reader module setup and ignore readers that we don't have dependencies for.
  3. For CSM only import h5py after checking that the input is a H5 file by examining it and checking its magic number.

Incorrect CPHD PVP byte indexing

There is an issue with calculating the correct byte indexing for each CPHD PVP field during reads. The issue can be found in io/phase_history/cphd.py at line 381.

Currently field_offset is counted in 8-byte words; e.g. field_offset = 1 for the second PVP value in the vector. This can be fixed by multiplying field_offset by 8 on line 381 to get the correct byte offset:

current_offset = pvp_block_offset + pvp_offset + the_range[0]*vector_size + field_offset * 8
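The corrected arithmetic can be checked with a small sketch; the helper name and values here are hypothetical, not sarpy's actual variables:

```python
# Hedged sketch of the byte-offset arithmetic described in this issue.
PVP_WORD_BYTES = 8  # PVP field offsets are counted in 8-byte words


def pvp_field_byte_offset(pvp_block_offset, pvp_offset, vector_index,
                          vector_size, field_offset):
    # field_offset is in 8-byte words, so convert it to bytes
    return (pvp_block_offset + pvp_offset
            + vector_index * vector_size
            + field_offset * PVP_WORD_BYTES)

# field_offset = 1 (the second PVP field) contributes 8 bytes, not 1
```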

SICD 1.1 written with 'check_older_version' option is invalid

Using the following setup, setting the third argument to 'True' should produce a 1.1 SICD according to the documentation, but it fails to change the NITF subheader fields 'FTITLE' and 'IID2' to comply with the 1.1 specification as follows:
SarPy SICDWriter setup:

 x = SICDWriter('newSicd.nitf',meta,True,False)

According to the 1.1 spec, FTITLE and IID2 should be prefixed with "SICD:"

NGA STANDARDIZATION DOCUMENT SENSOR INDEPENDENT COMPLEX DATA (SICD) Volume 2
File Format Description Document Specification of the placement of SICD data products in the allowed image file formats.
(2014-09-30) Version 1.1

From Table 3.2 and Table 3.4: [table excerpts garbled in transcription; both show the "SICD:" prefix]

These prefixes are removed with SICD 1.2, but 1.1 must have them to follow spec.

Improve documentation:

  • We need a good naming scheme.
  • These should be incorporated smoothly into the sphinx documentation.
  • We need to get the sphinx documentation published - this is already a task...

Immediate documentation and example needs:
1 - basic data reading:
i.) - sarpy.io.complex basic complex data reading and associated tasks (presently sarpy_example.py)
ii.) - sarpy.io.product basic derived data reading and associated tasks
iii.) - sarpy.io.phase_history basic phase history data reading and associated tasks

2 - sarpy.consistency usage - simple script and command line example for each of:
i.) sicd_consistency
ii.) sidd_consistency
iii.) cphd_consistency

3 - sarpy.geometry
i.) point_projection examples
ii.) geocoords basics?
iii.) geometry_elements? - (presently polygon_example.py)

4 - basic file creation/writing:
i.) sarpy.io.complex conversion from one complex format to sicd
ii.) sarpy.io.product creation of derived products
iii.) sarpy.io.phase_history example for writing a cphd file

5 - command line utils (sarpy.utils) - capability summary and basic usage overview for each of:
i.) convert_to_sicd (beef this utility up too?)
ii.) create_product
iii.) create_kmz (beef this utility up too?)
iv.) nitf_utils
v.) cphd_utils - pretty feeble right now, anything to beef this up with?
vi.) review_class -> review_classification (this one is trivial)
vii.) sicd_des_head - should almost certainly be junked

ground_to_image iters is int and can not resize

I am new to sarpy, but when I tried to project 2 points using ground_to_image it called _ground_to_image in the num_points < block_size section.

iters comes out of _ground_to_image as an int and that does not work in the resize command at the bottom of ground_to_image.

I was able to fix this locally by adding this line after the call to _ground_to_image

iters = numpy.full((num_points,), iters)

That will just replicate that value.
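The reported workaround can be reproduced in isolation (values hypothetical):

```python
import numpy as np

# Sketch of the fix described above: numpy.full replicates the scalar
# iteration count into a one-per-point array, so downstream code that
# expects an array (e.g. for resize/indexing) works for multiple points.
num_points = 2
iters = 5                              # scalar, as returned by _ground_to_image
iters = np.full((num_points,), iters)  # array of shape (num_points,)
```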

These won't always be exactly equal

srp_iac = np.dot([iax, iay, unit(np.cross(iax, iay))], srp - iarp)

srp comes from the PVP parameters which are double precision floating point.

iarp comes from the XML string representation which has limited significant digits.

There are a few solutions:

  • np.around(srp, decimals=9)
  • print more significant digits in the XML

I lean towards the first one. If someone doesn't use sarpy to create the CPHD (as I did) this could still fail.
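A minimal sketch of the first proposed fix, with hypothetical values (note that numpy's keyword for np.around is decimals):

```python
import numpy as np

# Rounding both quantities to 9 decimal places makes the full-precision
# PVP value and the XML value with fewer printed digits compare equal.
srp_component = 1.234567890123   # double precision, from the PVP data
iarp_component = 1.23456789      # limited digits, from the XML string
rounded_equal = (np.around(srp_component, decimals=9)
                 == np.around(iarp_component, decimals=9))
```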

SIDD creation workflow

Refactoring color subaperture image creation, and using this as a SIDD creation test case.

Size of ENGDATA field in ENGRDA TRE is incorrect

Calculation of the size of the ENGDATA field in the ENGRDA TRE appears to be incorrect. I believe it depends on both ENGDATC, the count, as well as ENGDTS, the size. In other words, rather than this:

self.add_field('ENGDATA', 'b', self.ENGDATC, value)

I believe the calculation should be:

self.add_field('ENGDATA', 'b', self.ENGDATC * self.ENGDTS, value)

This fixed it for me locally. I can't find a spec for ENGRDA to refer to, but the Nitro ENGRDA source backs this up.
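The proposed length calculation amounts to the following (the helper name is hypothetical):

```python
# Sketch of the ENGDATA field-length calculation discussed above:
# total bytes = element count (ENGDATC) * per-element size in bytes (ENGDTS).
def engdata_byte_length(engdatc, engdts):
    return engdatc * engdts

# e.g. 4 two-byte elements occupy 8 bytes, not 4
```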

failure writing multi-segment SICDs

Trying to test some image handling code, I wanted to generate some fake SICDs with multiple NITF image segments. To do this I scraped the SICD metadata from an existing image and used it to write out a new image with a different number of rows/cols. The metadata won't make sense, but this ought to work.

import numpy as np
import argparse

import sarpy.io.complex.sicd as sicd
import sarpy.io.complex as cf


parser = argparse.ArgumentParser(description='Fake SICD Generator')
parser.add_argument('InFile', help='Any SICD file to scrape metadata from')
parser.add_argument('OutFile', help='Output SICD')
parser.add_argument('Rows', type=int, help='Rows of output SICD')
parser.add_argument('Cols', type=int, help='Cols of output SICD')

args = parser.parse_args()

reader = cf.open(args.InFile)
meta = reader.sicd_meta

d1 = np.random.rand(args.Rows, args.Cols)
d2 = np.random.rand(args.Rows, args.Cols)
data = d1 + d2 * 1j
meta.ImageData.NumRows = args.Rows
meta.ImageData.NumCols = args.Cols
meta.ImageData.FullImage.NumRows = meta.ImageData.NumRows
meta.ImageData.FullImage.NumCols = meta.ImageData.NumCols
meta.ImageData.PixelType = 'RE32F_IM32F'

writer = sicd.SICDWriter(args.OutFile, meta)
writer.write_chip(data)
del writer

python make_multi_seg_image.py ~/sample-sicds/sicd_example_1_PFA_RE32F_IM32F_HH.nitf multi-seg.nitf 200100 2
WARNING:root:Got string input value of length 97 for attribute IID2 of class ImageSegmentHeader, which is longer than the allowed length 80, so truncating
WARNING:root:Got string input value of length 97 for attribute IID2 of class ImageSegmentHeader, which is longer than the allowed length 80, so truncating
WARNING:root:Got string input value of length 97 for attribute IID2 of class ImageSegmentHeader, which is longer than the allowed length 80, so truncating
WARNING:root:Got string input value of length 11 for attribute ILOC of class ImageSegmentHeader, which is longer than the allowed length 10, so truncating
WARNING:root:Got string input value of length 97 for attribute FTITLE of class NITFHeader, which is longer than the allowed length 80, so truncating
Traceback (most recent call last):
  File "make_multi_seg_image.py", line 29, in <module>
    writer.write_chip(data)
  File "/home/kjurka/git/kjurka/sarpy/sarpy/io/general/nitf.py", line 2515, in write_chip
    self.__call__(data, start_indices=start_indices, index=index)
  File "/home/kjurka/git/kjurka/sarpy/sarpy/io/general/nitf.py", line 2573, in __call__
    (this_inds[0], this_inds[2]))
  File "/home/kjurka/git/kjurka/sarpy/sarpy/io/general/base.py", line 1895, in __call__
    self._call(start1, stop1, start2, stop2, data_view)
  File "/home/kjurka/git/kjurka/sarpy/sarpy/io/general/base.py", line 1902, in _call
    self._memory_map[start1:stop1, start2:stop2] = data
ValueError: could not broadcast input array from shape (99999,2,2) into shape (0,2,2)
CRITICAL:root:Image segment 1 has only written 0 of 199998 pixels
CRITICAL:root:Image segment 2 has only written 0 of 204 pixels
Exception ignored in: <bound method AbstractWriter.__del__ of <sarpy.io.complex.sicd.SICDWriter object at 0x7f0bb1e595e8>>
Traceback (most recent call last):
  File "/home/kjurka/git/kjurka/sarpy/sarpy/io/general/base.py", line 1175, in __del__
  File "/home/kjurka/git/kjurka/sarpy/sarpy/io/general/nitf.py", line 2611, in close
OSError: The NITF file multi-seg.nitf image data is not fully written, and the file is potentially corrupt.
Image segment 1 has only written 0 of 199998 pixels
Image segment 2 has only written 0 of 204 pixels

Issue with OrthorectificationIterator when creating SIDD from in-memory SICD

I am attempting to create a SIDD from an in-memory SICD. This seems to not work due to an issue in OrthorectificationIterator.

new_meta=meta.copy()
apod_shape=np.shape(tImage3a)
new_meta.ImageData.NumRows=apod_shape[0]
new_meta.ImageData.NumCols=apod_shape[1]
new_reader=FlatSICDReader(new_meta, tImage3a)

ortho_method=NearestNeighborMethod(new_reader, index=0, complex_valued=True, apply_radiometric=None, subtract_radiometric_noise=False)

create_detected_image_sidd(ortho_method, '.', block_size=10, version=2, remap_function=High_Contrast())

I get the following error:
  File "/opt/miniconda3/lib/python3.7/site-packages/sarpy/io/product/sidd_product_creation.py", line 159, in create_detected_image_sidd
    remap_function=remap_function, recalc_remap_globals=False)
  File "/opt/miniconda3/lib/python3.7/site-packages/sarpy/processing/ortho_rectify.py", line 2294, in __init__
    if os.path.abspath(ortho_helper.reader.file_name) !=
  File "/opt/miniconda3/lib/python3.7/posixpath.py", line 378, in abspath
    path = os.fspath(path)
TypeError: expected str, bytes or os.PathLike object, not NoneType

Invalid SICDs when using numpy 1.9

setup.py specifies the minimum version of numpy to be 1.9, however the SICD converters do not work correctly with 1.9.

I created a RS2 SICD and the resulting xsd:dateTime fields are improperly formatted. They contain my local time zone in addition to the Z. As a result the XML does not pass the schema.

        <DateTime>2015-11-30T18:13:53.000000-0700Z</DateTime>
        <CollectStart>2008-05-04T01:30:48.413940-0700Z</CollectStart>

This looks to be caused by the datetime behavior change introduced in numpy 1.11.

The simplest fix is probably to change the minimum numpy version to 1.11. If 1.9 is still needed, then the timezone needs to be stripped out, maybe in serialize_plain?
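The behavior change can be seen directly: with numpy >= 1.11, datetime64 is timezone-naive, so formatting a value no longer appends the local UTC offset that broke the xsd:dateTime fields shown above (sketch; the behavior is version-dependent):

```python
import numpy as np

# With numpy >= 1.11, datetime64 carries no timezone, so the formatted
# string contains no local-offset suffix like '-0700'.
stamp = str(np.datetime64('2008-05-04T01:30:48.413940'))
```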

create a sicd.MetaNode from string?

Hi all.

If I have a SICD Meta object from sarpy e.g.

>>> type(my_meta)
<class 'sarpy.io.complex.sicd.MetaNode'>

I can convert it to a string:

>>> meta_string = str(my_meta)

or

>>> meta_string = my_meta.__str__()

or

>>> meta_string = my_meta.__repr__()

But I can't convert that string back to a MetaNode

>>> meta_from_string = sarpy.io.complex.sicd.MetaNode(meta_string) 
TypeError: object() takes no parameters

Is there a way to do this?

Support for reading SICDs directly from cloud storage

As more and more large imagery is kept in cloud object storage, is there any plan for sarpy to be able to access these resources via smart_open or otherwise? Specifically I'm interested in S3 support for reading SICDs, but looking at the reader implementation, the file path is passed down pretty deeply into things.

Problems pickling SICDType objects

Recently, we upgraded from an older commit on GitHub to the latest SarPy (1.1.78) available from PyPI. For the most part, the transition was straightforward; however, pickling has presented an issue. For example:

>>> reader = SICDReader('foo.ntf')
>>> meta = reader.sicd_meta
>>> with open('foo.pkl', 'wb') as fid:
...     pickle.dump(meta, fid)
>>> with open('foo.pkl', 'rb') as fid:
...     meta_pkl = pickle.load(fid)
>>> meta_pkl
SICDType(**OrderedDict())

The deserialized object is a SICDType containing an empty dictionary. Am I doing something wrong here? Or does SICDType not support pickling? Thanks!

problem running fft-example

bradh@minutae:~/sarpy/docs/examples$ python3 ./fft_example.py 
File /home/bradh/Data/sarpy_data/nitf/sicd_example_1_PFA_RE32F_IM32F_HH.nitf is determined to be a SICD (NITF format) file.
ERROR:root:ImpRespBW (15.17029) must be <= DeltaK2 - DeltaK1 (15.17028)
ERROR:root:Issue discovered with Col attribute of type <class 'sarpy.io.complex.sicd_elements.Grid.DirParamType'> of class GridType.
ERROR:root:Issue discovered with Grid attribute of type <class 'sarpy.io.complex.sicd_elements.Grid.GridType'> of class SICDType.
compute the fft to display in range / polar azimuth
Traceback (most recent call last):
  File "./fft_example.py", line 15, in <module>
    cdata = ro.read_chip((0, ro.data_size[0] - 1, 1), (0, ro.data_size[1] - 1, 1))
TypeError: unsupported operand type(s) for -: 'tuple' and 'int'

it looks like it's opening the file OK, but ro.data_size is a tuple of tuples: ((3975, 6724),).

It works better if I make this change:

diff --git a/docs/examples/fft_example.py b/docs/examples/fft_example.py
index 09f7c55..d7e79ae 100644
--- a/docs/examples/fft_example.py
+++ b/docs/examples/fft_example.py
@@ -12,7 +12,7 @@ fname = os.path.expanduser(os.path.join('~/Data/sarpy_data/nitf', 'sicd_example_
 ro = cf.open(fname)
 
 print("compute the fft to display in range / polar azimuth")
-cdata = ro.read_chip((0, ro.data_size[0] - 1, 1), (0, ro.data_size[1] - 1, 1))
+cdata = ro.read_chip((0, ro.data_size[0][0] - 1, 1), (0, ro.data_size[0][1] - 1, 1))
 
 cdata = cdata[0:1000, 0:1000]

I'm not familiar with the code base, so can't say if I'm fixing or working around the actual issue here. Let me know if a PR would help.

read_chip unexpected behavior

When I read a chip using slices, I get what I expect:

import sarpy.io.complex as cf

ro = cf.open(fn) #some .sicd file
ro[0:100,0:100].shape
(100, 100)

When I use the read_chip function without the 3rd parameter in each bound range defined, I would expect to get the same output as above:

ro.read_chip((0,100),(0,100)).shape
(100, 100)

Instead, I do not get any data back:

ro.read_chip((0,100),(0,100)).shape
(0, 0)

If I include the 3rd parameter, things work the way I would expect again

ro.read_chip((0,100,None),(0,100,None)).shape
 (100, 100)

I see the note under the _reorder_arguments function that says that if range = (int,int), then range = (stop, step), and I can get around this by just always making the 3rd argument in the bounds = 1, but this seems like unintuitive behavior.
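The (stop, step) interpretation described in that note can be mimicked with a small sketch (illustrative only, not sarpy's actual code):

```python
# A 2-tuple is treated as (stop, step) rather than (start, stop), which is
# why read_chip((0, 100), (0, 100)) yields an empty (0, 0) result: the
# stop value is interpreted as 0.
def interpret_range(rng):
    if len(rng) == 2:            # (stop, step)
        return 0, rng[0], rng[1]
    start, stop, step = rng      # (start, stop, step)
    return start, stop, 1 if step is None else step
```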

Refactor application logging to use a custom logger

Currently, the sarpy source code package does logging with the root logger. Something like -

logging.info("This is a comment")

In the current implementation I cannot explicitly set the log level for the sarpy application. If we define a custom logger for sarpy, we can explicitly tell 3rd party applications that use sarpy to use a specific log level in relation to sarpy logging.

My proposed change is to add some basic logging config to the top level __init__.py. Here is an example of something I use.

import logging

# Setup logging
logger = logging.getLogger(__name__)  # ** __name__ evaluates to "sarpy" here
logger.setLevel(logging.WARNING)

# create console handler and set level to debug
ch = logging.StreamHandler()

# create formatter
formatter = logging.Formatter("%(asctime)s [%(levelname)s] %(name)s: %(message)s")

# add formatter to ch
ch.setFormatter(formatter)

# add ch to logger
logger.addHandler(ch)

In the sarpy source code we can just create a logger like the below and use that anywhere we had previously used logging.XXX.

import logging
logger = logging.getLogger(__name__)  # ** __name__ evaluates to "sarpy.my_module" here. The log is passed up the tree to the configured sarpy logger

logger.info("Some log")

And in 3rd party apps (notebooks, other src, etc.), we can change the log level of sarpy specifically using something similar.

import logging
import sarpy

logger = logging.getLogger("sarpy")
logger.setLevel(logging.ERROR)  # Set just the sarpy log level, not my current application

Thanks, and let me know if this is something I can help with a PR for.
Zach

No module 'sarpy.io'

With a clean install, the sarpy python modules don't seem to load properly. Please assist

$ git clone https://github.com/ngageoint/sarpy/
$ cd sarpy
$ python3 setup.py install --user
$ cd docs/examples
$ python3 sarpy_example.py
Traceback (most recent call last):
  File "sarpy_example.py", line 6, in <module>
    import sarpy.io.complex as cf
ModuleNotFoundError: No module named 'sarpy.io'

Deprecation Workaround for scipy.misc.comb

scipy.misc.comb is used in csk.py, radarsat.py, and sentinel.py under io/complex/utils, and maybe others. This scipy function has been deprecated since version 1.0.0 (documentation here).

The following solution (from here) is an alternative that requires no external dependencies:

from math import factorial

def comb(n, k):
    return factorial(n) / factorial(k) / factorial(n - k)
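For Python 3.8 and later, the standard library also provides math.comb, an exact integer binomial coefficient with no external dependency; it agrees with the factorial-based replacement suggested in this issue (using integer division to keep the result exact):

```python
from math import comb, factorial  # math.comb is available in Python >= 3.8


def comb_via_factorials(n, k):
    # the factorial-based replacement, with integer division for exactness
    return factorial(n) // (factorial(k) * factorial(n - k))

# the two approaches agree for non-negative integers n >= k
```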

Please make pip package and/or more elaborate ReadMe for usage

Could you create a python package or at least a wheel for this so people can pip install? The ReadMe also gives very little guidance at the moment as to how to actually use the code.

Without a package, I tried to navigate into io/complex and follow the "Example of how to use this package" steps, but I'm getting the following error because I am trying to import from geometry, which is above my current root folder of io.

>>> import complex
>>> reader_object = complex.open('test.h5')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "complex/__init__.py", line 70, in open
    modules = [sys.modules[names] for names in module_names if __import__(names)]
  File "complex/radarsat.py", line 4, in <module>
    from .sicd import MetaNode
  File "complex/sicd.py", line 8, in <module>
    from ...geometry import geocoords as gc
ValueError: Attempted relative import beyond toplevel package

If I go up a level and import io.complex as complex, I'm getting the following error, which I don't understand as well.
>>> import io.complex as complex
>>> reader_object = complex.open('test.h5')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "io/complex/__init__.py", line 70, in open
    modules = [sys.modules[names] for names in module_names if __import__(names)]
  File "io/complex/radarsat.py", line 4, in <module>
    from .sicd import MetaNode
  File "io/complex/sicd.py", line 6, in <module>
    from .utils import bip
  File "io/complex/utils/bip.py", line 3, in <module>
    import numpy as np
  File "/home/kate/miniconda3/envs/py27/lib/python2.7/site-packages/numpy/__init__.py", line 142, in <module>
    from . import add_newdocs
  File "/home/kate/miniconda3/envs/py27/lib/python2.7/site-packages/numpy/add_newdocs.py", line 13, in <module>
    from numpy.lib import add_newdoc
  File "/home/kate/miniconda3/envs/py27/lib/python2.7/site-packages/numpy/lib/__init__.py", line 23, in <module>
    from .npyio import *
  File "/home/kate/miniconda3/envs/py27/lib/python2.7/site-packages/numpy/lib/npyio.py", line 14, in <module>
    from ._datasource import DataSource
  File "/home/kate/miniconda3/envs/py27/lib/python2.7/site-packages/numpy/lib/_datasource.py", line 220, in <module>
    _file_openers = _FileOpeners()
  File "/home/kate/miniconda3/envs/py27/lib/python2.7/site-packages/numpy/lib/_datasource.py", line 162, in __init__
    self._file_openers = {None: io.open}
AttributeError: 'module' object has no attribute 'open'

I'm excited to use the package once I get it up and running. Thanks!

Reading of ICEYE complex product results with an error

One of the latest changes in the BaseChipper class introduced different set of arguments to __init__ member of the class. The subclass ICEYEChipper is still using old 'complex_type' named argument in its own __init__ method:

    def __init__(self, file_name, data_size, symmetry, complex_type=True, real_group='s_i', imaginary_group='s_q'):
        self._file_name = file_name
        self._real_group = real_group
        self._imaginary_group = imaginary_group
        super(ICEYEChipper, self).__init__(data_size, symmetry=symmetry, complex_type=complex_type)

This results with an error every time ICEYEChipper is used:

  File "(...)\sarpy\io\complex\iceye.py", line 469, in __init__
    super(ICEYEChipper, self).__init__(data_size, symmetry=symmetry, complex_type=complex_type)
TypeError: __init__() got an unexpected keyword argument 'complex_type'

I believe the correct version should be:

    def __init__(self, file_name, data_size, symmetry, transform_data='complex', real_group='s_i', imaginary_group='s_q'):
        self._file_name = file_name
        self._real_group = real_group
        self._imaginary_group = imaginary_group
        super(ICEYEChipper, self).__init__(data_size, symmetry=symmetry, transform_data=transform_data)

This fixed the problem for me, but I can't push to the repo.

AttributeError: 'module' object has no attribute 'BufferedIOBase'

I used setup.py build and install to have things properly installed under Ubuntu 16.04.4. I then tried to run the sarpy_example.py but I'm stuck in this error:

AttributeError: 'module' object has no attribute 'BufferedIOBase'

After some reading, it looks to be a conflict with the 'io' directory within 'sarpy':

https://stackoverflow.com/questions/32657580/class-gzipfileio-bufferediobase-attributeerror-module-object-has-no-attrib?noredirect=1&lq=1

I'm really not familiar with Python and I don't know how to fix issues like this. It seems that renaming the 'io' directory to something else would solve the problem, but then all the code and package paths would need to be updated accordingly.
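This kind of clash comes from Python 2's implicit relative imports: a module inside the sarpy package that does `import io` can pick up the package's own sarpy/io directory instead of the standard library module. Python 3 always uses absolute imports, and on Python 2 the same behavior can be forced with `from __future__ import absolute_import`. A quick diagnostic sketch to check which 'io' is being resolved:

```python
# Diagnostic sketch: verify that 'io' resolves to the standard library and not
# to a same-named package directory (such as sarpy/io) on the import path.
from __future__ import absolute_import  # forces absolute imports on Python 2

import io

# The stdlib module defines BufferedIOBase; a shadowing package would not.
print(io.__file__)  # a path inside sarpy/ would indicate shadowing
is_stdlib = hasattr(io, 'BufferedIOBase')
```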

sicd_meta.RadarCollection.Area.Plane.SegmentList Failed converting XML

I have a SICD with a RadarCollection.Area.Plane segment defined in the DES, but I get an error when I try to load the file using sarpy.
I created this example XML (modified from the SIX repo) which validates against the XML schema:
sicd_example_RMA_RGZERO_RE32F_IM32F_cropped_multiple_image_segments_PlaneArea-XML_DATA_CONTENT0.txt

    xmllint --schema sarpy/sarpy/io/complex/sicd_schema/SICD_schema_V1_1_0_2014_07_08.xsd sicd_example_RMA_RGZERO_RE32F_IM32F_cropped_multiple_image_segments_PlaneArea-XML_DATA_CONTENT0.txt > xmllint.txt
    sicd_example_RMA_RGZERO_RE32F_IM32F_cropped_multiple_image_segments_PlaneArea-XML_DATA_CONTENT0.txt validates

When I manually read this XML using sarpy, I get an error:

>>> from sarpy.io.complex.sicd_elements.SICD import SICDType
>>> with open('sicd_example_RMA_RGZERO_RE32F_IM32F_cropped_multiple_image_segments_PlaneArea-XML_DATA_CONTENT0.txt', 'rb') as f:
...     des_bytes = f.read()
>>> root_node, xml_ns = utils.parse_xml_from_string(des_bytes.decode('utf-8').strip())
>>> sicd_meta = SICDType.from_node(root_node, xml_ns, ns_key='default')
ERROR:root:Failed converting <Element '{urn:SICD:1.1.0}Plane' at 0x7f04263467c8> of type
<class 'xml.etree.ElementTree.Element'> to Serializable type
<class 'sarpy.io.complex.sicd_elements.RadarCollection.ReferencePlaneType'> for field Plane
of class AreaType with exception <class 'ValueError'> - Attribute coords of array type
functionality belonging to class SerializableArray got a ElementTree element with size
attribute 3, but has 0 child nodes with tag SegmentList. Setting value to None, which may
be against the standard.

It looks like changing sarpy.io.complex.sicd_elements.RadarCollection.ReferencePlaneType._collections_tags to {'SegmentList': {'array': True, 'child_tag': 'Segment'}} allows the file to import.
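The failure mode can be reproduced generically with xml.etree: the array container element advertises its length in a size attribute, and the parser must count children by the correct child tag ('Segment' inside 'SegmentList', not 'SegmentList' itself). A self-contained sketch with illustrative XML, not a real SICD fragment:

```python
import xml.etree.ElementTree as ET

# Illustrative namespaced XML mimicking the SegmentList array container.
xml_text = """
<Plane xmlns="urn:example">
  <SegmentList size="3">
    <Segment>a</Segment>
    <Segment>b</Segment>
    <Segment>c</Segment>
  </SegmentList>
</Plane>
"""

ns = {'d': 'urn:example'}
root = ET.fromstring(xml_text)
seg_list = root.find('d:SegmentList', ns)
declared = int(seg_list.attrib['size'])

# Counting children by the wrong tag reproduces the "size attribute 3, but
# has 0 child nodes" mismatch; the correct child tag is 'Segment'.
wrong_count = len(seg_list.findall('d:SegmentList', ns))
right_count = len(seg_list.findall('d:Segment', ns))
```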

example for writing a SICD?

Reading a SICD was pretty straightforward:

    from sarpy.io.complex import open
    sp = open(sicd_file)
    meta = sp.sicd_meta
    cimg = sp.read_chip(1, 1)

Is there an example of writing a SICD in the source code? I'm having trouble finding something like

    import sarpy.io.complex.SICDWriter
    SICDWriter(image=cimg, meta=meta, filename='out.sicd')

Embedded records in ENGRDA TREs not parsed correctly

When trying to process a NITF with an ENGRDA TRE, I'm finding that the embedded Records fail to parse. Rather than starting at an offset into the TRE data (at ENGLN, 23 bytes in, after RESRC and RECNT), they're starting at the beginning (i.e., the RESRC field) and failing because what's parsed is not an integer, as expected, but character data.

This is possibly a bug in TRELoop. Changing

 entry = child_type(value, *args, **kwargs)

to

 entry = child_type(value[loc:], *args, **kwargs)

fixed this for me locally.
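The offset issue can be illustrated with a generic byte parser: each embedded record must be sliced from the running offset into the TRE body, not from the start, which is exactly what passing value[loc:] to the child parser accomplishes. A self-contained sketch (field names RESRC/RECNT borrowed from the report above; the record layout is simplified and hypothetical):

```python
def parse_records(tre_body):
    """Parse a simplified ENGRDA-like body: a 20-byte source name (RESRC),
    a 3-digit record count (RECNT), then fixed-size 5-byte records.

    Each record is sliced from the running offset `loc`; slicing from the
    start instead would hand the child parser the RESRC character data and
    fail when it expects an integer field.
    """
    resrc = tre_body[:20].decode('ascii').rstrip()
    recnt = int(tre_body[20:23])
    loc = 23
    records = []
    for _ in range(recnt):
        records.append(tre_body[loc:loc + 5].decode('ascii'))
        loc += 5
    return resrc, records

# Toy body: padded source name, count of 2, then two 5-byte records.
body = b'MYSOURCE' + b' ' * 12 + b'002' + b'AAAAA' + b'BBBBB'
resrc, records = parse_records(body)
```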
