slix's People

Contributors

miriammenzel, oliviaguest, thyre

slix's Issues

[DOC] Integration with Fiber Tractography Software

In your bioRxiv paper, it says "We developed the open-source software SLIX (Scattered Light Imaging
ToolboX) that allows an automated evaluation of the measurement, and the computation of different parameter maps
containing various tissue information. The resulting fiber direction maps can be used, for example, as input for fiber
tractography algorithms."

From the perspective of a user, I'm not sure SLIX meets the JOSS requirement that "software should be feature-complete" without documentation and an example of how to integrate with tractography software or algorithms. It looks like it is very close to being usable in a TRACULA pipeline, for instance (there might be a step or two missing, such as coregistration), but it is not apparent, to me at least, how the SLIX output images would be integrated.

openjournals/joss-reviews#2675

Publish package on PyPI

As @matteomancini mentions in the review for the JOSS paper, the installation process could be simplified for many Python users by uploading the SLIX package to PyPI. There are several advantages, such as a one-line installation without a manual download and a simpler update process.

I haven't actually released anything on PyPI yet and don't know the process, but I will definitely look into it, as it would be a great improvement for users and might also make the package easier to find.

[DOC] Understanding sample data

I think it's a bit hard to figure out how to use the code without the sample data being explained better. Are the images 24 light angles, each 1314 × 1176 pixels?

I assume you would get images in some image format from the microscope, but then it's not clear how you would convert those to .nii with the light angle stored in the .nii metadata. And for the .tiff files, how is the light angle stored? That is, if the third dimension is indeed the light angle.

I think an explanation of how to get from the raw data capture to the parameter maps, which can then be used in many ways to find results, should be included in the documentation.

import nibabel as nib
img = nib.load('SLI-human-Sub-01_2xOpticTracts_s0037_30um_SLI_105_Stack_3days_registered.nii')
img.shape  # (1314, 1176, 24, 1)
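If the third dimension is indeed the light (rotation) angle, stacking the per-angle microscope images along that axis would reproduce the layout above. Here is a minimal NumPy sketch, assuming the 24 images cover a full rotation in equal 15° steps, so that the angle is implied by slice order rather than stored in the metadata (this is only my reading of the data, not a documented fact); the small image size is for brevity only:

```python
import numpy as np

# Hypothetical stand-ins for the 24 single-angle microscope images
# (real SLI images would be 1314 x 1176 pixels; 8 x 8 here for brevity).
n_angles = 24
images = [np.random.rand(8, 8) for _ in range(n_angles)]

# Stack along a third axis so the layout matches (x, y, angle),
# as in the (1314, 1176, 24, 1) sample file above.
stack = np.stack(images, axis=-1)
print(stack.shape)  # (8, 8, 24)

# Assumption: the 24 angles span a full rotation in equal steps,
# so slice i would correspond to i * 15 degrees.
angles = np.arange(n_angles) * 360.0 / n_angles
```

Such an array could then be written out with nibabel for use as SLIX input, but whether the angle convention matches SLIX's expectation is exactly what the documentation should clarify.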

openjournals/joss-reviews#2675

[BUG, DOC] End-of-file error when running an example

I downloaded Vervet1818_s0512_60um_SLI_090_Stack_1day.nii from the link in the README example and tried to run it directly as input. In the example GIF, I saw that the input was a stack.tif. Does SLIXParameterGenerator take a .nii file as input, and where is it documented which input types are accepted? It would also be helpful to have clear instructions on how to run an example with this or another file.

If SLIXParameterGenerator does take a .nii file as input (it did look like the command got part of the way through), it hung after completing the ROI step, so I stopped execution. Any idea what may be going wrong?

(venv) Alexs-MacBook-Pro:bin alexrockhill$ ./SLIXParameterGenerator -i ../Vervet1818_s0512_60um_SLI_090_Stack_1day.nii -o ../output/
SLI Feature Generator:
Number of threads: 4

Chosen feature maps:
Direction maps: True
Peak maps: True
Peak prominence map: True
Peak width map: True
Peak distance map: True
Optional maps: False

../Vervet1818_s0512_60um_SLI_090_Stack_1day.nii
Roi finished




^CTraceback (most recent call last):
  File "./SLIXParameterGenerator", line 233, in generate_feature_maps
    resulting_parameter_maps[i, current_index] = toolbox.prominence(peak_positions_high_non_centroid, roi)
  File "/Users/alexrockhill/software/SLIX/venv/lib/python3.7/site-packages/SLIX-1.1-py3.7.egg/SLIX/toolbox.py", line 187, in prominence
  File "/Users/alexrockhill/software/SLIX/venv/lib/python3.7/site-packages/SLIX-1.1-py3.7.egg/SLIX/toolbox.py", line 657, in normalize
  File "<__array_function__ internals>", line 6, in mean
  File "/Users/alexrockhill/software/SLIX/venv/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 3372, in mean
    return _methods._mean(a, axis=axis, dtype=dtype,
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./SLIXParameterGenerator", line 366, in <module>
    args['with_smoothing'], args['mask_threshold'])
  File "./SLIXParameterGenerator", line 69, in full_pipeline
    parameter_maps = generate_feature_maps(roiset, selected_methods)
  File "./SLIXParameterGenerator", line 249, in generate_feature_maps
    current_index += 3
  File "/Users/alexrockhill/software/SLIX/venv/lib/python3.7/site-packages/pymp/__init__.py", line 122, in __exit__
    self._exception_queue.put((exc_t, exc_val, self._thread_num))
  File "<string>", line 2, in put
  File "/Users/alexrockhill/software/anaconda3/envs/swannlab/lib/python3.7/multiprocessing/managers.py", line 819, in _callmethod
    kind, result = conn.recv()
  File "/Users/alexrockhill/software/anaconda3/envs/swannlab/lib/python3.7/multiprocessing/connection.py", line 250, in recv
    buf = self._recv_bytes()
  File "/Users/alexrockhill/software/anaconda3/envs/swannlab/lib/python3.7/multiprocessing/connection.py", line 407, in _recv_bytes
    buf = self._recv(4)
  File "/Users/alexrockhill/software/anaconda3/envs/swannlab/lib/python3.7/multiprocessing/connection.py", line 383, in _recv
    raise EOFError
EOFError

openjournals/joss-reviews#2675

[DOC] Main Functionality Example

I ran the first example and got the figures in the attached output.zip. They look a bit sparse.

Making "a detailed reconstruction of the brain's nerve fiber architecture," as stated in the paper, seems to me to require something more like Figure 2i, which I don't see in the output.

It would be helpful to have

  1. explanations of the examples like those in the last section of the README, and
  2. the example generate a figure like Figure 2i, which I think would better qualify as a reconstruction of nerve architecture.

Also, it is not immediately obvious how one would proceed with Figure 2i as input to some kind of analysis. To bring up FreeSurfer as an example again: in its reconstruction, you have different brain regions segmented out in volumetric boundaries. Having some format that separates each nerve fiber seems like a fairly crucial part of a reconstruction. Perhaps I am misunderstanding the directional .tiff images and they in fact do just that.

openjournals/joss-reviews#2675

[DOC] Methods explanations

JOSS policy requires "A summary describing the high-level functionality and purpose of the software for a diverse, non-specialist audience."

Methodological choices, such as why 8% prominence is used to qualify a peak as prominent, or even why prominent peaks are the best method for identifying crossing fibers, don't seem readily understandable from the paper and documentation for a diverse audience.

Perhaps the best way to explain the methods would be an example or documentation walking the user through the process from raw data, through each of the figures, to the output, considering parameter and processing choices along the way.

How and when to use --with_mask and --with_smoothing, especially, are not readily apparent.

EDIT: From reading Appendix A of the bioRxiv paper, I see why you chose 8% prominence, but the intensity range (Imax - Imin) may vary across setups. I'm not sure I would use 8% as a hard rule; I would instead want to check it on a small section of the data that can be inspected manually, or find some other solution. Or maybe the goodness of fit of this and other parameters can be ascertained from Figure 8 (the parameter maps), but it is not clear from the documentation how to do so.
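For context, the 8% criterion as I read it in the appendix can be sketched with scipy.signal.find_peaks, which accepts a minimal prominence directly. This is only an illustration of the threshold logic on a synthetic profile, not SLIX's actual implementation:

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic line profile standing in for one pixel's intensity across
# the rotation angles (values chosen for illustration only).
profile = np.array([10, 80, 10, 12, 14, 12, 10, 80, 10], dtype=float)

# The 8% criterion: a peak counts as prominent only if its prominence
# exceeds 8% of the profile's total intensity range (Imax - Imin).
threshold = 0.08 * (profile.max() - profile.min())  # 0.08 * 70 = 5.6

peaks, _ = find_peaks(profile, prominence=threshold)
print(peaks)  # [1 7] -- the small bump at index 4 (prominence 4) is rejected
```

Whether 5.6 intensity units is a sensible cutoff clearly depends on the dynamic range of the particular setup, which is exactly why a fixed 8% seems worth documenting or exposing as a parameter.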

openjournals/joss-reviews#2675

[FEATURE] Correct visualization of Scatterometry measurements

Is your feature request related to a problem? Please describe.
In the current state of SLIX, angular measurements like those shown throughout the repository can be evaluated without major issues. The parameter maps are generated and stored in the output folder as expected.

Scatterometry measurements are a different scope entirely, and a toolbox similar to SLIX is being developed in-house for them, but one sometimes wants to visualize scatterometry patterns with SLIX's available vector visualization.

Currently, this causes problems because the vectors of the output image and the patterns aren't aligned.

Describe the solution you'd like
The general idea would be to introduce either a new parameter that lets the user indicate that their data isn't in the regular format, or some kind of automation that recognizes when the input direction images and the background image are nowhere near the intended shape.
