
draco's People

Contributors: anjakefala, arnab-half-blood-prince, cahofer, fandinomat, jdmena, jmaceachern, josephwkania, jrs65, ljgray, nritsche, rikvl, sjforeman, ssiegelx, tristpinsm


draco's Issues

IOError with Nandump

I am having issues with commit 1341eb8
(check task output for NaN's/Inf's and log/dump/skip them). For now I am just going to set nan_dump = False in order not to run into this issue.

I did not look into this bug in more detail, but I think it has to do with the fact that it's trying to dump the file on each rank, because the error message is

IOError: Unable to create file (unable to open file: name = 'nandump_LoadDataFiles_0.h5', errno = 17, error message = 'File exists', flags = 15, o_flags = c2)

This is the full traceback, deduplicated to a single rank (sorry, this was run with 4 processes; it works fine with one):

273.6s [MPI 2/4] - INFO draco.core.task.LoadDataFiles: Reading file 1 of 774. (/project/rpp-krs/chime/chime_archive/20121207T174000Z_mingun_weather/20140621.h5)
273.6s [MPI 2/4] - INFO draco.core.task.LoadDataFiles: NaN's found in dataset /windGustDir [13 of 288 elements]
273.6s [MPI 2/4] - INFO draco.core.task.LoadDataFiles: NaN's found in dataset /windDir [13 of 288 elements]
273.6s [MPI 0/4] - DEBUG draco.core.task.LoadDataFiles: NaN found. Dumping nandump_LoadDataFiles_0.h5
273.6s [MPI 1/4] - DEBUG draco.core.task.LoadDataFiles: NaN found. Dumping nandump_LoadDataFiles_0.h5
273.6s [MPI 2/4] - DEBUG draco.core.task.LoadDataFiles: NaN found. Dumping nandump_LoadDataFiles_0.h5
273.6s [MPI 3/4] - DEBUG draco.core.task.LoadDataFiles: NaN found. Dumping nandump_LoadDataFiles_0.h5
File "/project/6003614/cahofer/ch_pipeline/venv/src/draco/draco/core/task.py", line 327, in next
out = self.next(*args)
File "/project/6003614/cahofer/ch_pipeline/venv/src/draco/draco/core/task.py", line 327, in next
dispatch(parser, *args, **kwargs)
File "/project/6003614/chime/chime_env/2018_04/base/lib/python2.7/site-packages/argh/dispatching.py", line 174, in dispatch
for line in lines:
File "/project/6003614/chime/chime_env/2018_04/base/lib/python2.7/site-packages/argh/dispatching.py", line 277, in _execute_command
for line in result:
File "/project/6003614/chime/chime_env/2018_04/base/lib/python2.7/site-packages/argh/dispatching.py", line 260, in _call
result = function(*positional, **keywords)
File "/project/6003614/cahofer/ch_pipeline/venv/src/caput/scripts/caput-pipeline", line 24, in run
P.run()
File "/project/6003614/cahofer/ch_pipeline/venv/src/caput/caput/pipeline.py", line 473, in run
out = task._pipeline_next()
File "/project/6003614/cahofer/ch_pipeline/venv/src/caput/caput/pipeline.py", line 818, in _pipeline_next
out = self.next(*args)
File "/project/6003614/cahofer/ch_pipeline/venv/src/draco/draco/core/task.py", line 327, in next
output = self._nan_process_output(output)
File "/project/6003614/cahofer/ch_pipeline/venv/src/draco/draco/core/task.py", line 394, in _nan_process_output
output = self._nan_process_output(output)
output = self._nan_process_output(output)
File "/project/6003614/cahofer/ch_pipeline/venv/src/draco/draco/core/task.py", line 394, in _nan_process_output
File "/project/6003614/cahofer/ch_pipeline/venv/src/draco/draco/core/task.py", line 394, in _nan_process_output
self.write_output(outfile, output)
File "/project/6003614/cahofer/ch_pipeline/venv/src/caput/caput/pipeline.py", line 1275, in write_output
self.write_output(outfile, output)
File "/project/6003614/cahofer/ch_pipeline/venv/src/caput/caput/pipeline.py", line 1275, in write_output
self.write_output(outfile, output)
File "/project/6003614/cahofer/ch_pipeline/venv/src/caput/caput/pipeline.py", line 1275, in write_output
output.save(filename)
File "/project/6003614/cahofer/ch_pipeline/venv/src/caput/caput/memh5.py", line 1467, in save
output.save(filename)
File "/project/6003614/cahofer/ch_pipeline/venv/src/caput/caput/memh5.py", line 1467, in save
output.save(filename)
File "/project/6003614/cahofer/ch_pipeline/venv/src/caput/caput/memh5.py", line 1467, in save
self._data.to_hdf5(filename, **kwargs)
File "/project/6003614/cahofer/ch_pipeline/venv/src/caput/caput/memh5.py", line 483, in to_hdf5
self._data.to_hdf5(filename, **kwargs)
File "/project/6003614/cahofer/ch_pipeline/venv/src/caput/caput/memh5.py", line 483, in to_hdf5
with h5py.File(filename, **kwargs) as f:
File "/project/6003614/chime/chime_env/2018_04/base/lib/python2.7/site-packages/h5py/_hl/files.py", line 394, in init
with h5py.File(filename, **kwargs) as f:
File "/project/6003614/chime/chime_env/2018_04/base/lib/python2.7/site-packages/h5py/_hl/files.py", line 394, in init
swmr=swmr)
File "/project/6003614/chime/chime_env/2018_04/base/lib/python2.7/site-packages/h5py/_hl/files.py", line 195, in make_fid
fid = h5f.create(name, h5f.ACC_EXCL, fapl=fapl, fcpl=fcpl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
self._data.to_hdf5(filename, **kwargs)
File "/project/6003614/cahofer/ch_pipeline/venv/src/caput/caput/memh5.py", line 483, in to_hdf5
with h5py.File(filename, **kwargs) as f:
File "/project/6003614/chime/chime_env/2018_04/base/lib/python2.7/site-packages/h5py/_hl/files.py", line 394, in init
swmr=swmr)
File "/project/6003614/chime/chime_env/2018_04/base/lib/python2.7/site-packages/h5py/_hl/files.py", line 195, in make_fid
fid = h5f.create(name, h5f.ACC_EXCL, fapl=fapl, fcpl=fcpl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 105, in h5py.h5f.create
IOError: Unable to create file (unable to open file: name = 'nandump_LoadDataFiles_0.h5', errno = 17, error message = 'File exists', flags = 15, o_flags = c2
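
The dump file is opened with h5py's exclusive-create flag (h5f.ACC_EXCL), so the first rank to create it wins and every other rank hits errno 17. A minimal sketch of one possible fix, making the filename rank-unique (the helper and its arguments are hypothetical, not the actual draco code):

    from mpi4py import MPI

    def nandump_filename(task_name, count, comm=None):
        """Build a rank-unique dump filename so the exclusive HDF5 create
        cannot collide when several ranks find NaNs at once. Sketch only."""
        comm = MPI.COMM_WORLD if comm is None else comm
        return "nandump_{}_{}_rank{}.h5".format(task_name, count, comm.rank)

    # e.g. on rank 2: 'nandump_LoadDataFiles_0_rank2.h5'
    outfile = nandump_filename("LoadDataFiles", 0)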

Insert version and config into the "history"

BasicCont objects have a .history section; this is where the version and config tracking should have gone, so we should put them there (using the .add_history(...) call) rather than in the root metadata.
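
For instance (a sketch: cont stands for any BasicCont style output, and the exact add_history signature is assumed from its name):

    # Hypothetical version/config metadata gathered by the pipeline.
    versions = {"draco": "x.y.z", "caput": "x.y.z"}
    config_yaml = "..."  # the pipeline config as a string

    # Record them in the history section rather than the root metadata.
    cont.add_history("versions", versions)
    cont.add_history("config", {"yaml": config_yaml})

    # Read back later from the history property.
    print(cont.history["versions"])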

Generate a Gaussian noise dataset

This task should:

  • Use the noise estimates of an existing dataset to replace the data section (use the weight property).
  • Support multiple container types. Duck typing may not work here, because the data section has different names and is sometimes real and sometimes complex, so you might need to specify the properties for different types of data.
  • Support both draco and CHIME timestream types without explicitly importing CHIME packages (this you may need to duck type)
  • Efficiently generate random numbers (use RandomGen which is already a dependency)

@tristpinsm feel free to add items to this one.
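
A rough sketch of the core operation, assuming the weight dataset holds inverse variances (the function name and the complex-variance split are assumptions):

    import numpy as np

    def gaussian_noise_like(weight, seed=None, complex_data=True):
        """Draw Gaussian noise with variance 1/weight per sample; zero
        weight marks missing data and yields zero noise. Sketch only."""
        rng = np.random.default_rng(seed)

        # Standard deviation from the inverse variance, avoiding 1/0.
        std = np.zeros(weight.shape)
        nz = weight > 0
        std[nz] = weight[nz] ** -0.5

        if complex_data:
            # Split the total variance between real and imaginary parts.
            noise = rng.standard_normal(weight.shape) + 1j * rng.standard_normal(weight.shape)
            return noise * std / np.sqrt(2)

        return rng.standard_normal(weight.shape) * std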

Intelligently set `z_error` field for mock catalogs

The SpectroscopicCatalog container has a z_error field that currently isn't modified by the Add{Gaussian,EBOSS}ZErrorsToCatalog tasks. It would make sense to store the standard deviation of the distribution the z errors are drawn from, added in quadrature to any existing z_error value.
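
The update itself would be a quadrature sum (a sketch, assuming the redshift dataset's structured fields and that sigma_z is the standard deviation of the newly added errors):

    import numpy as np

    # cat stands for a SpectroscopicCatalog; sigma_z for the std. dev. of
    # the error distribution the task draws from (both hypothetical here).
    z = cat["redshift"][:]  # structured array with "z" and "z_error"
    z["z_error"] = np.sqrt(z["z_error"] ** 2 + sigma_z ** 2)
    cat["redshift"][:] = z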

Generalize chime eigen-calibration to arbitrary driftscan telescopes

  • Move tasks from ch_pipeline.analysis.calibration to draco.analysis.calibration:
    • DetermineSourceTransit
    • TransitFit
    • GainFromTransitFit
    • FlagAmplitude
  • Update tasks to obtain feed information (polarisation and position) from a telescope instance provided during setup, instead of from an input map.
  • Update tasks to use telescope instance for ephemeris calculations instead of ch_util.ephemeris.
  • Create an EigenContainer with the equivalent of the evec, eval, and erms datasets from the CHIME real-time pipeline. Create a subclass of TimeStream and EigenContainer that can replicate the chimecal acquisitions. (A rough container sketch follows this list.)
  • Write a new task that performs an eigendecomposition of the N2 visibility matrix and populates an EigenContainer.
  • Remove dependence on ch_util modules. We will need to decide where to put these:
    • fluxcat
    • cal_utils
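
A hedged sketch of what an EigenContainer might look like, following the draco.core.containers conventions; the axis names, dtypes, and distribution choices are assumptions rather than a final design:

    import numpy as np

    from draco.core import containers

    class EigenContainer(containers.ContainerBase):
        """Eigendecomposition of the N2 visibility matrix (sketch)."""

        _axes = ("freq", "eigen", "input", "time")

        _dataset_spec = {
            "evec": {
                "axes": ["freq", "eigen", "input", "time"],
                "dtype": np.complex64,
                "initialise": True,
                "distributed": True,
                "distributed_axis": "freq",
            },
            "eval": {
                "axes": ["freq", "eigen", "time"],
                "dtype": np.float32,
                "initialise": True,
                "distributed": True,
                "distributed_axis": "freq",
            },
            "erms": {
                "axes": ["freq", "time"],
                "dtype": np.float32,
                "initialise": True,
                "distributed": True,
                "distributed_axis": "freq",
            },
        }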

Migrate to NumPy 1.17

This ticket is to track the migration of the code to NumPy 1.17.

In NumPy 1.17, RandomGen was integrated into NumPy.

The advantage of using RandomGen over NumPy's legacy random interface is mainly performance.

From @jrs65:

As @tristpinsm says the advantage is performance, and there are certain tasks (of which this will be one) where the speed of the RNG is the bottleneck. I introduced it for the delay power spectrum estimator (which in some sense internally does what you're doing here hundreds of times), and it took it down from 40 mins per power spectrum to more like 10 mins.

However, the changes made to the NumPy API as part of this integration were substantial. A small excerpt:

So, seeding seems to work in different ways for the "legacy random" and the "new generators".

RandomState provides access to the legacy random interface (https://numpy.org/devdocs/reference/random/legacy.html); get_state/set_state/seed specifically work with the legacy state (https://numpy.org/devdocs/reference/random/legacy.html?highlight=seed).

The new Generator works by initialising a generator with a seed (https://numpy.org/devdocs/reference/random/generator.html#numpy.random.Generator). SeedSequence (https://numpy.org/devdocs/reference/random/bit_generators/generated/numpy.random.SeedSequence.html#numpy.random.SeedSequence) is the main class that determines the sequence of seeds.

So if we bump to NumPy 1.17 it will be a bit of a refactor, and the two random interfaces do not share seed state: the same seed value produces unrelated streams.
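
To make the seeding difference concrete (standard NumPy >= 1.17 API):

    import numpy as np

    # Legacy interface: module-level state seeded with np.random.seed.
    np.random.seed(42)
    legacy = np.random.standard_normal(4)

    # New interface: an explicit Generator built from a SeedSequence.
    ss = np.random.SeedSequence(42)
    rng = np.random.Generator(np.random.PCG64(ss))
    new = rng.standard_normal(4)
    # `legacy` and `new` differ: the two interfaces do not share seed state.

    # SeedSequence.spawn gives independent child seeds, e.g. one per rank.
    child_rngs = [np.random.Generator(np.random.PCG64(s)) for s in ss.spawn(4)]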

Fixes for ThresholdVisWeight

This task has a few issues that should be fixed up.

Overall the point of the task is twofold:

  • Identify gaps in the regridded data that are undetermined because of RFI. These regions have very small weights (like ~1e-8), rather than zero weights, because of the way the regridder works.
  • Identify regions on the edges of RFI generated gaps in the data where the regridder hasn't done a good job, and the data should not be trusted. These often have weights that are a reasonable fraction of the typical weight for data that is present (something like ~20%), but this isn't really a good estimate of their uncertainty.

Both of these regions should have their weights set to zero.

As implemented this task has several issues:

  • The major issue with this task is that the flagging decisions end up being baseline dependent, but the problem it's trying to solve (i.e. the regridding issues) is baseline independent. This causes problems downstream, as we pretty much assume that flagging like this depends only on the time/frequency sample, not on the actual baseline.
  • At high frequencies near the edge of the band, the weights roll off as we lose sensitivity. The current fractional thresholding can cause those frequencies to be flagged out even when they're actually fine (it's just that, even in the best case, those frequencies have lower sensitivity).
  • The flagging is performed within the task itself, but typically we have a pattern of one task that determines what the masking should be and another task that applies it. If the masking becomes baseline independent, it can be represented as a standard RFI mask container and applied with the usual ApplyRFIMask task.

I think a reasonable path to achieve this is (sketched in code after this list):

  • Redistribute the data over frequencies.
  • Aggregate the weights along the baseline axis. A simple mean/median over this axis should suffice.
  • Apply an absolute value cut to the aggregated weights, masking sample values lower than a configurable threshold.
  • Take the mean/median in time for each frequency after the previous mask (i.e. for each frequency, you want the typical value of the samples that survived the absolute value cut).
  • Use this as the baseline for the relative weight cut, i.e. mask out each sample if it is less than some threshold times the aggregate for its frequency.
  • Generate an RFI mask container based on this.
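
A rough sketch of the proposed scheme, assuming a weight array with axes (freq, baseline, time); the function name, default thresholds, and the True-means-flagged convention are all assumptions:

    import numpy as np

    def threshold_mask(weight, absolute_threshold=1e-7, relative_threshold=0.5):
        """Baseline-independent mask from the weights. Sketch only."""

        # Aggregate over the baseline axis so the decision cannot depend
        # on the baseline.
        agg = np.median(weight, axis=1)  # shape (freq, time)

        # Absolute cut on the aggregated weights.
        mask = agg < absolute_threshold

        # Per-frequency typical value of the samples that survived the cut.
        typical = np.zeros(agg.shape[0])
        for fi in range(agg.shape[0]):
            good = agg[fi][~mask[fi]]
            if good.size:
                typical[fi] = np.median(good)

        # Relative cut against the per-frequency typical value.
        mask |= agg < relative_threshold * typical[:, np.newaxis]

        return mask  # True means flagged, as in an RFI mask container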

Allow setting arbitrary attributes via params in tasks

This enhancement is to add to draco.core.task.SingleTask a config option which allows setting arbitrary attributes on any BasicCont style outputs.

This would be something like:

<task stuff>
params:
    attributes:
        tracer: QSO
        oldtag: "oldtag_{tag}"

where the {...} allows interpolation of strings using any item already within the params (plus maybe the count parameter from each task).
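
A hedged sketch of how this might look on SingleTask; the hook method, the use of output.attrs, and the interpolation values available are assumptions about the eventual design:

    from caput import config
    from draco.core import task

    class AttributeSettingTask(task.SingleTask):
        """Sketch: set arbitrary attributes on outputs from the config."""

        attributes = config.Property(proptype=dict, default=None)

        def _set_output_attributes(self, output):
            if self.attributes is None:
                return output

            tag = output.attrs.get("tag", "")
            for name, value in self.attributes.items():
                if isinstance(value, str):
                    # Interpolate {tag} (and potentially other params or
                    # the count) into string values; self._count is assumed
                    # to be the task's iteration counter.
                    value = value.format(tag=tag, count=self._count)
                output.attrs[name] = value

            return output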

Clean up warnings generated by unit tests

=============================== warnings summary ===============================
test/test_write_metadata.py::test_metadata_to_hdf5
test/test_write_metadata.py::test_metadata_to_yaml
test/test_write_metadata.py::test_metadata_to_yaml
  /home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/caput/pipeline.py:792: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() or inspect.getfullargspec()
    setup_argspec = inspect.getargspec(self.setup)

test/test_write_metadata.py::test_metadata_to_hdf5
test/test_write_metadata.py::test_metadata_to_yaml
test/test_write_metadata.py::test_metadata_to_yaml
  /home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/caput/pipeline.py:813: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() or inspect.getfullargspec()
    next_argspec = inspect.getargspec(self.next)

test/test_write_metadata.py::test_metadata_to_hdf5
test/test_write_metadata.py::test_metadata_to_yaml
test/test_write_metadata.py::test_metadata_to_yaml
  /home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/draco/core/task.py:302: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() or inspect.getfullargspec()
    pro_argspec = inspect.getargspec(self.process)

test/test_write_metadata.py::test_metadata_to_yaml
  /home/travis/virtualenv/python3.7.1/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
    return f(*args, **kwds)

-- Docs: https://docs.pytest.org/en/latest/warnings.html
==================== 2 passed, 10 warnings in 0.34 seconds =====================
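
These all have mechanical fixes. For the getargspec deprecations, a signature-based helper along these lines would do (standard library only; how the pipeline actually uses the argspec is not shown here):

    import inspect

    def count_positional_args(func):
        """Replacement for the deprecated inspect.getargspec: count the
        positional parameters of `func` using inspect.signature. Note that
        signature() already excludes `self` for bound methods, unlike
        getargspec, so callers must not subtract it again."""
        sig = inspect.signature(func)
        return sum(
            1
            for p in sig.parameters.values()
            if p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD)
        )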

Axis selections as list of indices don't work with `mpiarray`

Although LoadFilesFromParams accepts lists of indices for axis selections, it appears that caput.mpiarray only supports slices.

Here is the relevant traceback from a pipeline that attempted to use LoadFilesFromParams with a freq_index property:

    output = self.process()
  File "/project/6003614/tristpm/maps/code/draco/draco/core/io.py", line 528, in process
    cont = self._load_file(file_)
  File "/project/6003614/tristpm/maps/code/draco/draco/core/io.py", line 434, in _load_file
    cont = new_cls.from_file(
  File "/project/6003614/tristpm/maps/code/caput/caput/memh5.py", line 1550, in from_file
    data = MemGroup.from_hdf5(
  File "/project/6003614/tristpm/maps/code/caput/caput/memh5.py", line 469, in from_hdf5
    self = _distributed_group_from_hdf5(
  File "/project/6003614/tristpm/maps/code/caput/caput/memh5.py", line 2601, in _distributed_group_from_hdf5
    _copy_from_file(f, group, selections)
  File "/project/6003614/tristpm/maps/code/caput/caput/memh5.py", line 2568, in _copy_from_file
    pdata = mpiarray.MPIArray.from_hdf5(
  File "/project/6003614/tristpm/maps/code/caput/caput/mpiarray.py", line 659, in from_hdf5
    gshape.append(_len_slice(sl, l))
  File "/project/6003614/tristpm/maps/code/caput/caput/mpiarray.py", line 1110, in _len_slice
    start, stop, step = slice_.indices(n)
AttributeError: 'list' object has no attribute 'indices'
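
One way the length helper could be generalised to accept index lists as well as slices (a sketch, not the actual caput change):

    def _len_selection(sel, n):
        """Length of an axis selection that may be a slice or a list of
        indices; the current _len_slice assumes a slice and calls
        sel.indices(n), which is what raises the AttributeError above."""
        if isinstance(sel, slice):
            return len(range(*sel.indices(n)))
        if isinstance(sel, (list, tuple)):
            return len(sel)
        raise ValueError("Unsupported selection type: {!r}".format(type(sel)))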

containers - issue with prod / stack axis

I pulled the most recent version of draco, which I hadn't done in a few months (it includes the most recent pull request #13), and there seems to be some issue with the new SiderealStream container when I try to simulate a sidereal stream.

When using draco.synthesis.stream.SimulateSidereal:

sstream = containers.SiderealStream(freq=freqmap, ra=ntime, input=feed_index, prod=tel.uniquepairs, distributed=True, comm=map_.comm)

File "/project/6003614/cahofer/ch_pipeline/venv/src/draco/draco/core/containers.py", line 571, in __init__
stack['prod'][:] = np.arange(len(prod))
ValueError: could not broadcast input array from shape (752) into shape (752,2)

prod is a (752,2) array in the old format.

It seems this is already a fairly old change (made 4 months ago by Tristan).
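
If the mismatch is just the old (752, 2) pair format, one workaround might be to convert the pairs into the structured prod array the new container expects; the field names and dtype below follow the draco convention but are assumptions:

    import numpy as np

    # tel.uniquepairs is the old-style (nprod, 2) integer array of input
    # pairs; build the structured prod index_map from it.
    prod_pairs = tel.uniquepairs
    prod_map = np.zeros(
        len(prod_pairs), dtype=[("input_a", "<u2"), ("input_b", "<u2")]
    )
    prod_map["input_a"] = prod_pairs[:, 0]
    prod_map["input_b"] = prod_pairs[:, 1]

    sstream = containers.SiderealStream(
        freq=freqmap, ra=ntime, input=feed_index, prod=prod_map,
        distributed=True, comm=map_.comm,
    )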

_pack_marray function from commit 1d4730 buggy

Here is the traceback:

  File "/home/pboubel/code/draco/draco/analysis/transform.py", line 548, in process
    marray = _make_marray(sstream.vis[:], mmax)
  File "/home/pboubel/code/draco/draco/analysis/transform.py", line 568, in _make_marray
    marray = _pack_marray(mmodes, mmax)
  File "/home/pboubel/code/draco/draco/analysis/transform.py", line 593, in _pack_marray
    marray[:mlim+1, 0] = mmodes[:mlim+1]  # Non-negative modes
ValueError: could not broadcast input array from shape (3,7155,4096) into shape (2049,3,7155)

This worked before the latest changes.

Error from find_key() in analysis.BaseMapMaker

When I try to use DirtyMapMaker on some input m-modes, I get an error with the following traceback:

  File "/home/sforeman/ch/ch_pipeline/src/draco/draco/core/task.py", line 329, in next
    output = self.process(*input)
  File "/home/sforeman/ch/ch_pipeline/src/draco/draco/analysis/mapmaker.py", line 84, in process
    freq_ind = [find_key(bt_freq, mf) for mf in mm_freq]
  File "/home/sforeman/ch/ch_pipeline/src/draco/draco/analysis/mapmaker.py", line 84, in <listcomp>
    freq_ind = [find_key(bt_freq, mf) for mf in mm_freq]
  File "/home/sforeman/ch/ch_pipeline/src/draco/draco/analysis/mapmaker.py", line 74, in find_key
    return map(tuple, list(key_list)).index(tuple(key))
AttributeError: 'map' object has no attribute 'index'

StackOverflow (https://stackoverflow.com/questions/33717314/attributeerror-map-obejct-has-no-attribute-index-python-3) says this is a Python 3 compatibility issue (I'm using Python 3.6.9), but the fix they recommend doesn't work either. On the other hand, if we take the find_key() routine,

    def find_key(key_list, key):
        try:
            return map(tuple, list(key_list)).index(tuple(key))
        except TypeError:
            return list(key_list).index(key)
        except ValueError:
            return None

and change line 75 from except TypeError: to except (TypeError, AttributeError):, everything runs fine. Should I submit a PR with this change?
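
An alternative that avoids the lazy map object entirely (a Python 3 safe rewrite of the same lookup logic):

    def find_key(key_list, key):
        """Python 3 safe version: materialise the tuples into a list,
        since map() returns a lazy iterator with no .index method."""
        try:
            return [tuple(k) for k in key_list].index(tuple(key))
        except TypeError:
            return list(key_list).index(key)
        except ValueError:
            return None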

Numerical issues in `SmoothVisWeights` -> `SiderealRegridder` tasks.

This is an issue to track the NaN/inf values appearing in the SiderealRegridder task. Based on the current testing:

  1. Using a weighted median filter in SmoothVisWeights (draco#187) and setting weights to zero where vis_weights are zero avoids the issue, suggesting that it's related to the inclusion of zeros in the median calculation in SmoothVisWeights. Setting all weights to 1 or using scipy.ndimage.median_filter produces the error.
  2. The error occurs on different ranks/subsets of the data, making me think it's related to some floating-point error in very small/large values.
  3. Oddly, saving the data returned by SmoothVisWeights and then feeding that into SiderealRegridder in either a notebook or a new pipeline task avoids the issue. The only thing I can think of here is that the save/load process affects the data in some subtle way.

Fix crash in beamformer when no data is available

On days where there is no data available (e.g. CSD=2214), the daily pipeline will run and do nothing except the BeamFormCat task, which crashes because the .epoch attribute hasn't been set (it is derived from the data).

Traceback (most recent call last):
  File "/home/jrs65/chime_pipeline_stable/code/caput/caput/scripts/runner.py", line 430, in <module>
    cli()
  File "/project/rpp-chime/chime/chime_env/modules/chime/python/2021.03/lib/python3.8/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/project/rpp-chime/chime/chime_env/modules/chime/python/2021.03/lib/python3.8/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/project/rpp-chime/chime/chime_env/modules/chime/python/2021.03/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/project/rpp-chime/chime/chime_env/modules/chime/python/2021.03/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/project/rpp-chime/chime/chime_env/modules/chime/python/2021.03/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/jrs65/chime_pipeline_stable/code/caput/caput/scripts/runner.py", line 148, in run
    P.run()
  File "/project/6003614/chime/chime_processed/daily/rev_03/code/caput/caput/pipeline.py", line 603, in run
    out = task._pipeline_next()
  File "/project/6003614/chime/chime_processed/daily/rev_03/code/caput/caput/pipeline.py", line 1038, in _pipeline_next
    out = self.next(*args)
  File "/project/6003614/chime/chime_processed/daily/rev_03/code/draco/draco/core/task.py", line 319, in next
    output = self.process(*input)
  File "/project/6003614/chime/chime_processed/daily/rev_03/code/draco/draco/analysis/beamform.py", line 695, in process
    self._process_catalog(source_cat)
  File "/project/6003614/chime/chime_processed/daily/rev_03/code/draco/draco/analysis/beamform.py", line 611, in _process_catalog
    catalog["position"]["ra"], catalog["position"]["dec"], self.epoch
AttributeError: 'BeamFormCat' object has no attribute 'epoch'
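
A minimal guard along these lines might be enough (a sketch; the attribute test and returning None to skip the catalog are assumptions about the task's contract, not the actual fix):

    def process(self, source_cat):
        # Sketch: if no data ever arrived, self.epoch was never derived
        # from it, so log and skip the catalog instead of crashing.
        if not hasattr(self, "epoch"):
            self.log.warning(
                "No data received before catalog %s; skipping beamforming.",
                source_cat.attrs.get("tag", "<unknown>"),
            )
            return None

        # ... otherwise continue with the normal beamforming path.
        self._process_catalog(source_cat)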

TypeError: process() missing 1 required positional argument: 'inp'

Hi all,

I am trying to implement a pipeline for measuring the power spectrum via draco (or, more specifically, the Radio Cosmology tools). I wrote a config file with the appropriate tasks, but I get the error:

WARNING:draco.synthesis.stream.SimulateSidereal:Use of output_root is deprecated.
Traceback (most recent call last):
  File "/cluster/home/bin/caput-pipeline", line 8, in <module>
    sys.exit(cli())
  File "/cluster/apps/nss/python/3.7.4/x86_64/lib64/python3.7/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/cluster/apps/nss/python/3.7.4/x86_64/lib64/python3.7/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/cluster/apps/nss/python/3.7.4/x86_64/lib64/python3.7/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/cluster/apps/nss/python/3.7.4/x86_64/lib64/python3.7/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/cluster/apps/nss/python/3.7.4/x86_64/lib64/python3.7/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/cluster/home/lib64/python3.7/site-packages/caput/scripts/runner.py", line 161, in run
    P.run()
  File "/cluster/home/lib64/python3.7/site-packages/caput/pipeline.py", line 633, in run
    out = task._pipeline_next()
  File "/cluster/home/lib64/python3.7/site-packages/caput/pipeline.py", line 1085, in _pipeline_next
    out = self.next(*args)
  File "/cluster/home/lib64/python3.7/site-packages/draco/core/task.py", line 347, in next
    output = self.process(*input)
TypeError: process() missing 1 required positional argument: 'inp'

I couldn't pinpoint the source of the error and am not sure if it's related to the config file I wrote. Can you please help me with this?
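
This error usually means a task whose process() takes an input was never wired to one: in the pipeline config, every consuming task needs an in: key matching some other task's out: tag. A hypothetical config fragment (the consumer task here is just an example):

    pipeline:
      tasks:
        - type: draco.synthesis.stream.SimulateSidereal
          out: sstream

        # Without the `in:` key below, process(self, inp) is called with
        # no argument, producing exactly this TypeError.
        - type: draco.analysis.transform.MModeTransform
          in: sstream
          out: mmodes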

Cythonize `invert_no_zero`

The numpy.where call in util.tools.invert_no_zero unnecessarily makes a full copy of the array in memory. We should re-implement this function in Cython to do it in place, and probably parallelise it as well.
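
Until a Cython version lands, the extra full-size temporary can already be avoided in pure NumPy by writing into the input (a sketch of the in-place idea, not the proposed Cython implementation):

    import numpy as np

    def invert_no_zero_inplace(x):
        """Reciprocal that maps exact zeros to zero, overwriting `x`
        instead of allocating a full-size temporary the way
        np.where(x == 0, 0, 1.0 / x) does. A Cython version could also
        drop the boolean temporary and parallelise the loop."""
        zero = x == 0.0                        # one boolean temporary
        np.divide(1.0, x, out=x, where=~zero)  # in-place reciprocal
        x[zero] = 0.0                          # zeros stay zeros
        return x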

Incompatibility of hybrid ringmap maker with simulated sidereal streams

The routines in draco.analysis.ringmapmaker are incompatible with sidereal streams simulated with non-CHIME telescope classes in at least two ways:

  1. The following check in MakeVisGrid fails:

    if np.all(
        sstream.prodstack.view(np.uint16).reshape(-1, 2)
        != self.telescope.uniquepairs
    ):
        raise ValueError(
            "Products in sstream do not match those in the beam transfers."
        )

    This appears to come from the view of sstream.prodstack as np.uint16; changing the type to np.int resolves things.

  2. Sidereal streams from draco.synthesis.stream.SimulateSidereal do not have reverse_map["stack"], causing this to fail in MakeVisGrid:

    # Calculate the redundancy
    redundancy = tools.calculate_redundancy(
        sstream.input_flags[:],
        sstream.index_map["prod"][:],
        sstream.reverse_map["stack"]["stack"][:],
        sstream.vis.shape[1],
    )

    Since reverse_map["stack"] is constructed by a few lines in draco.analysis.transform.CollateProducts, we could simply copy these lines into draco.synthesis.stream.SimulateSidereal to ensure that simulated sidereal streams have this field. This would probably help for future compatibility of simulated sidereal streams and other routines as well.
