
deepslice's Introduction


DeepSlice is a Python library which automatically aligns mouse histology with the Allen Brain Atlas Common Coordinate Framework (and now rat brain histology to the Waxholm rat brain atlas, though this is in beta). The alignments are viewable, and refinable, using the QuickNII software package. DeepSlice requires no preprocessing and works on any stain; however, we have found it performs best on brightfield images. At present one limitation is that it only works on coronally cut sections; we will release an update in the future for sagittally and horizontally cut histology. DeepSlice automates the process of identifying exactly where in the brain a section lies. It can accommodate non-orthogonal cutting planes and will produce an image-specific annotation for each section in your brain.

Citation

If you use DeepSlice in your work, please cite Carey et al., 2023. It may also be useful if you mention the version you used :)

Workflow

DeepSlice is fully integrated with the QUINT workflow. QUINT helps you register, segment and quantify brain-wide datasets! 🐭🧠🔬💻🤖

Web Application

If you would like to use DeepSlice but don't need your own personal installation, check out DeepSlice Flask, a fully functional web application which will allow you to upload your dataset and download the aligned results. The web interface was developed by Michael Pegios.

Happy Aligning :)


Installation

From PIP

This is the easy and recommended way to install DeepSlice. First make sure you have Python 3.11 installed, and then simply:
pip install DeepSlice

And you're ready to go! 🚀 Check out the PyPI package here

If you run into any problems, create a GitHub issue and I will help you solve it.


Basic Usage

On start

After cloning our repo and navigating into the directory, open an IPython session and import our package.

from DeepSlice import DSModel     

Next, specify the species you would like to use and initiate the model.

species = 'mouse'  # available species are 'mouse' and 'rat'

Model = DSModel(species)

Important

  • Sections in a folder must all be from the same brain

  • DeepSlice uses all the sections you select to inform its prediction of section angle. Thus it is important that you do not include sections which lie outside of the Allen Brain Atlas. These include extremely rostral olfactory bulb and caudal medulla sections. If you include these sections in your selected folder it will reduce the quality of all the predictions.

  • The sections do not need to be in any kind of order.

  • The model downsamples images to 299x299. You do not need to worry about this, but be aware that there is no benefit from using higher resolutions.
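The last point above can be illustrated with a toy example. The sketch below uses nearest-neighbour sampling in plain Python purely as an illustration; DeepSlice handles the resizing internally, and its actual resampling method is not shown here.

```python
def downsample(image, size=299):
    """Nearest-neighbour resize of a 2D list-of-lists to size x size."""
    h, w = len(image), len(image[0])
    return [
        [image[r * h // size][c * w // size] for c in range(size)]
        for r in range(size)
    ]

# A 1196x1196 image collapses to 299x299: each output pixel is sampled from
# a 4x4 neighbourhood, so any detail finer than that is simply discarded.
big = [[(r + c) % 255 for c in range(1196)] for r in range(1196)]
small = downsample(big)
```

Whatever resolution you supply, only roughly 299x299 worth of information reaches the network, so downsampling very large scans yourself before upload can save transfer time without hurting accuracy.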


Predictions

Now your model is ready to use; just direct it towards the folder containing the images you would like to align.

eg:

    
 β”œβ”€β”€ your_brain_folder
 β”‚   β”œβ”€β”€ brain_slice_1.png 
 β”‚   β”œβ”€β”€ brain_slice_2.png     
 β”‚   β”œβ”€β”€ brain_slice_3.png

There should be only one sub-folder in this parent directory; in this example it is "your_brain_folder".
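If you want to sanity-check this layout before running the model, a small helper like the one below can verify that the parent directory contains exactly one sub-folder of images. The function name and the accepted extensions are this sketch's own choices, not part of the DeepSlice API.

```python
from pathlib import Path

def find_sections(parent_dir, exts=(".png", ".jpg", ".jpeg", ".tif", ".tiff")):
    """Return the single sub-folder of parent_dir and its image files, sorted."""
    subfolders = [p for p in Path(parent_dir).iterdir() if p.is_dir()]
    if len(subfolders) != 1:
        raise ValueError(f"expected exactly one sub-folder, found {len(subfolders)}")
    images = sorted(p for p in subfolders[0].iterdir() if p.suffix.lower() in exts)
    return subfolders[0], images
```

Running this on the example tree above would return the "your_brain_folder" path and its three brain_slice images.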

To align these images using DeepSlice simply call

folderpath = 'examples/example_brain/GLTa/'
# Run the model on your folder.
# Try with and without ensemble to find the model which works best for you.
# If you have section numbers included in the filenames as _sXXX, specify this :)
Model.predict(folderpath, ensemble=True, section_numbers=True)
# If you would like to normalise the angles (you should):
Model.propagate_angles()
# To reorder your sections according to their section numbers:
Model.enforce_index_order()
# Alternatively, if you know the precise index spacing (e.g. 1, 2, 4 indicates
# that section 3 has been left out of the series), use enforce_index_spacing.
# If you know the exact section thickness in microns, pass it instead of None;
# if your sections are numbered rostral to caudal you will need to specify a
# negative section_thickness.
Model.enforce_index_spacing(section_thickness=None)
# Saving produces a JSON file which can be placed in the same directory as
# your images and then opened with QuickNII.
Model.save_predictions(folderpath + 'MyResults')
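With section_numbers=True, DeepSlice expects a _sXXX tag in each filename. Below is a sketch of how such a tag can be extracted; the regex and helper are illustrative only and may differ from DeepSlice's internal parsing.

```python
import re

def section_number(filename):
    """Extract the numeric section index from a filename tagged _sXXX."""
    match = re.search(r"_s(\d+)", filename)
    if match is None:
        raise ValueError(f"no _sXXX section tag in {filename!r}")
    return int(match.group(1))

print(section_number("641_2002_2567_NM01_s104.png"))  # -> 104
```

If your filenames lack this tag, leave section_numbers=False and rely on enforce_index_spacing instead.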

Acknowledgements

We are grateful to Ann Goodchild for her time-saving blunt assessments of many failed prototypes, for the motivation provided by Dr William Redmond, and especially to Veronica Downs, Freja Warner Van Dijk and Jayme McCutcheon, whose Novice alignments were instrumental to this work. We would like to thank Gergely CsΓΊcs for providing his expertise and many atlasing tools. Work in the authors’ laboratories is supported by the National Health & Medical Research Council of Australia, the Hillcrest Foundation, and Macquarie University (SMcM), and from the European Union’s Horizon 2020 Framework Program for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3) and the Research Council of Norway under Grant Agreement No. 269774 (INCF, JGB). We are grateful to Macquarie University for access to their HPC resources, essential for production of early DeepSlice prototypes.

deepslice's People

Contributors

polarbean, thecobb, thermodev, wjguan


deepslice's Issues

Won’t install: ResolvePackageNotFound

Running the command conda env create -f DS-CPU.yml just results in

Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound: 
  - openjpeg==2.3.0=h5ec785f_1
  - pillow==8.1.0=py37h4fa10fc_0
  - wincertstore==0.2=py37_0
  - mkl-service==2.3.0=py37h196d8e1_0
  - python==3.7.9=h60c2a47_0
  - vc==14.2=h21ff451_1
  - setuptools==52.0.0=py37haa95532_0
  - scipy==1.6.0=py37h14eb087_0
  - libprotobuf==3.14.0=h23ce68f_0
  - snappy==1.1.8=h33f27b4_0
  - termcolor==1.1.0=py37haa95532_1
  - protobuf==3.14.0=py37hd77b12b_1
  - astor==0.8.1=py37haa95532_0
  - openssl==1.1.1i=h2bbff1b_0
  - giflib==5.2.1=h62dcd97_0
  - mkl_random==1.1.1=py37h47e9c7a_0
  - pyyaml==5.4.1=py37h2bbff1b_1
  - lcms2==2.11=hc51a39a_0
  - tensorflow==1.15.0=eigen_py37h9f89a44_0
  - zlib==1.2.11=h62dcd97_4
  - kiwisolver==1.3.1=py37hd77b12b_0
  - libzopfli==1.0.3=ha925a31_0
  - matplotlib-base==3.3.4=py37h49ac443_0
  - h5py==2.10.0=py37h5e291fa_0
  - lz4-c==1.9.3=h2bbff1b_0
  - jpeg==9b=hb83a4c4_2
  - vs2015_runtime==14.27.29016=h5e58377_2
  - zstd==1.4.5=h04227a9_0
  - tornado==6.1=py37h2bbff1b_0
  - markdown==3.3.3=py37haa95532_0
  - sqlite==3.33.0=h2a8f88b_0
  - mkl_fft==1.2.0=py37h45dec08_0
  - scikit-learn==0.23.2=py37h47e9c7a_0
  - zfp==0.5.5=hd77b12b_4
  - pandas==1.2.1=py37hf11a4ad_0
  - icc_rt==2019.0.0=h0cc432a_1
  - scikit-image==0.17.2=py37h1e1f486_0
  - xz==5.2.5=h62dcd97_0
  - yaml==0.2.5=he774522_0
  - brotli==1.0.9=ha925a31_2
  - libtiff==4.1.0=h56a325e_1
  - numpy==1.19.2=py37hadc3359_0
  - hdf5==1.10.4=h7ebc959_0
  - libaec==1.0.4=h33f27b4_1
  - ca-certificates==2021.1.19=haa95532_0
  - libpng==1.6.37=h2a8f88b_0
  - freetype==2.10.4=hd328e21_0
  - lerc==2.2.1=hd77b12b_0
  - charls==2.1.0=h33f27b4_2
  - bzip2==1.0.8=he774522_0
  - pip==20.3.3=py37haa95532_0
  - certifi==2020.12.5=py37haa95532_0
  - ipython==7.20.0=py37hd4e2768_1
  - libdeflate==1.7=h2bbff1b_5
  - imagecodecs==2021.1.11=py37h5da4933_1
  - blosc==1.21.0=h19a0ad4_0
  - tk==8.6.10=he774522_0
  - tensorflow-base==1.15.0=eigen_py37h07d2309_0
  - wrapt==1.12.1=py37he774522_1
  - six==1.15.0=py37haa95532_0
  - cytoolz==0.11.0=py37he774522_0
  - pywavelets==1.1.1=py37he774522_2
  - grpcio==1.35.0=py37hc60d5dd_0
  - numpy-base==1.19.2=py37ha3acd2a_0
  - pyreadline==2.1=py37_1

Alignment Problems

Our lab is very excited about the prospects of much faster atlas registration. However, I've tried on the client side and using the web platform to auto-align images, but unfortunately the model performed quite poorly on my DAPI images. Even when providing section-number information I get alignments like this (see images) for all my images. Not one image aligned well. I am hoping there is a fix for this and that I am just doing something wrong. Please advise.

In addition, the QuickNII software may be buggy as well. The same image pops up or does not change when you change images, e.g. DeepSlice Alignment2.jpg and 3 should be two different images but the DAPI image remains the same when selecting another image from the series or just never shows up at all (DeepSlice Alignment4.jpg) despite restarting the application many times.

DeepSlice Alignment1
DeepSlice Alignment2
DeepSlice Alignment3
DeepSlice Alignment4

cannot import DeepSlice

After setting up the envs using DS-GPU.yml, I cannot run 'from DeepSlice import DeepSlice'. In fact, I did not find 'DeepSlice' packages under the '...\anaconda3\envs\DS-GPU\Lib\site-packages' folder. I also did not find 'DeepSlice' listed in the DS-GPU.yml file.

DeepSlice Web won't run more than 25 images

I uploaded 68 images to DeepSlice Web and once it hits 25, it just gets stuck. If I press reload, I get an xml file with 1-25 and #68, but nothing between 25-68. Is there a capacity limit? I have more than 25 scenes of the brain I'm trying to upload.

Can't align the sections with Allen CCF

Hi there,

I used serial slices from AP -0.46 to -3.5, 50 µm, 69 slices in total, as the input to DeepSlice. However, the result is a bit weird.
I checked the XML file in QuickNII; it shows a huge difference between the linear and polinear results, and it looks like most of the slices have been mapped to the same ID in the Allen CCF. I am not sure what the issue is.

Enclosed with the log file, the output file, samples of slices, and the screenshot of QuickNII.
69

DeepSlice_CA2CA3EC_Output.csv
DeepSlice_Output
QuickNII-Result-Bad1
QuickNII-Result-Bad

DeepSlice_log.txt
DeepSlice_Log

I appreciate any comments or suggestions. Looking forward to hearing from you.

Invalid Namespace URI

When I reach the step of saving the .xml file in the ReadMe (Model.save_predictions(folderpath + 'MyResults')), I get the following error message. Has anyone else experienced this or know how to fix it? Thank you!

---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[9], line 1
----> 1 Model.save_predictions(folderpath + 'MyResults')

File ~\conda\envs\envdeepslice\Lib\site-packages\DeepSlice\main.py:198, in DSModel.save_predictions(self, filename)
194 self.predictions.to_csv(filename + ".csv", index=False)
195 QuickNII_functions.write_QUINT_JSON(
196 df=self.predictions, filename=filename, aligner=aligner, target=target
197 )
--> 198 QuickNII_functions.write_QuickNII_XML(
199 df=self.predictions, filename=filename, aligner=aligner
200 )

File ~\conda\envs\envdeepslice\Lib\site-packages\DeepSlice\read_and_write\QuickNII_functions.py:45, in write_QuickNII_XML(df, filename, aligner)
17 out_df = pd.DataFrame(
18 {
19 "anchoring": "ox="
(...)
41 }
42 )
43 print(f"saving to {filename}.xml")
---> 45 out_df.to_xml(
46 filename + ".xml",
47 index=False,
48 root_name="series",
49 row_name="slice",
50 attr_cols=list(out_df.columns),
51 namespaces={
52 "first": df_temp.nr.values[0],
53 "last": df_temp.nr.values[-1],
54 "name": filename,
55 "aligner": aligner,
56 "": "",
57 },
58 )

File ~\conda\envs\envdeepslice\Lib\site-packages\pandas\util_decorators.py:333, in deprecate_nonkeyword_arguments..decorate..wrapper(*args, **kwargs)
327 if len(args) > num_allow_args:
328 warnings.warn(
329 msg.format(arguments=_format_argument_list(allow_args)),
330 FutureWarning,
331 stacklevel=find_stack_level(),
332 )
--> 333 return func(*args, **kwargs)

File ~\conda\envs\envdeepslice\Lib\site-packages\pandas\core\frame.py:3643, in DataFrame.to_xml(self, path_or_buffer, index, root_name, row_name, na_rep, attr_cols, elem_cols, namespaces, prefix, encoding, xml_declaration, pretty_print, parser, stylesheet, compression, storage_options)
3622 raise ValueError("Values for parser can only be lxml or etree.")
3624 xml_formatter = TreeBuilder(
3625 self,
3626 path_or_buffer=path_or_buffer,
(...)
3640 storage_options=storage_options,
3641 )
-> 3643 return xml_formatter.write_output()

File ~\conda\envs\envdeepslice\Lib\site-packages\pandas\io\formats\xml.py:338, in _BaseXMLFormatter.write_output(self)
336 @Final
337 def write_output(self) -> str | None:
--> 338 xml_doc = self._build_tree()
340 if self.path_or_buffer is not None:
341 with get_handle(
342 self.path_or_buffer,
343 "wb",
(...)
346 is_text=False,
347 ) as handles:

File ~\conda\envs\envdeepslice\Lib\site-packages\pandas\io\formats\xml.py:464, in LxmlXMLFormatter._build_tree(self)
452 """
453 Build tree from data.
454
455 This method initializes the root and builds attributes and elements
456 with optional namespaces.
457 """
458 from lxml.etree import (
459 Element,
460 SubElement,
461 tostring,
462 )
--> 464 self.root = Element(f"{self.prefix_uri}{self.root_name}", nsmap=self.namespaces)
466 for d in self.frame_dicts.values():
467 elem_row = SubElement(self.root, f"{self.prefix_uri}{self.row_name}")

File src\lxml\etree.pyx:3092, in lxml.etree.Element()

File src\lxml\apihelpers.pxi:138, in lxml.etree._makeElement()

File src\lxml\apihelpers.pxi:125, in lxml.etree._makeElement()

File src\lxml\apihelpers.pxi:222, in lxml.etree._setNodeNamespaces()

File src\lxml\apihelpers.pxi:1752, in lxml.etree._uriValidOrRaise()

ValueError: Invalid namespace URI

XML not saving from anaconda prompt

Hi, thank you for making DeepSlice, it's been very handy!
I ran DeepSlice through the Anaconda prompt and it was able to save the csv and the json file, but the xml file does not save. When I go to open the json file in QuickNII, the original image is not displayed under the atlas predictions. Ideally I would like to do a manual check in QuickNII where I compare the original image to the DeepSlice atlas prediction and make adjustments as necessary. The predictions are what I would expect, but I can't easily verify that without seeing the original image underneath the atlas.
I tried running images in the web interface and saved all three files (csv, json, xml), then opened the xml in QuickNII. It opens and I can see the images underneath the atlas, but the atlas predictions are not accurate at all. The json saved from the web application has the same issue. I'm not sure on which end something is wrong (DeepSlice or QuickNII) because I've successfully used this workflow in the past for a different project.
Thank you in advance for your help!

Solving environment: failed

When I created DS-CPU or DS-GPU, I always got "Solving environment: failed".
Could you please help me to solve this issue?
MacBook Air M1

conda_environments % conda env create -f DS-GPU.yml
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

  • setuptools==61.2.0=py37haa95532_0
  • certifi==2022.5.18.1=py37haa95532_0
  • wincertstore==0.2=py37haa95532_2
  • openssl==1.1.1o=h2bbff1b_0
  • vs2015_runtime==14.27.29016=h5e58377_2
  • sqlite==3.38.3=h2bbff1b_0
  • vc==14.2=h21ff451_1
  • pip==21.2.4=py37haa95532_0
  • cudnn==7.6.0=cuda10.0_0
  • python==3.7.13=h6244533_0
  • ca-certificates==2022.4.26=haa95532_0
  • cudatoolkit==10.0.130=0

IndexError when activating the normalization of the angles

Hi, I activated the propagate_angles option using the basic example you posted on this git repo, but whatever the number of slices I have in my input folder, I always get an IndexError such as:
Traceback (most recent call last):
  File "deepslice.py", line 13, in <module>
    Model.propagate_angles()
  File "/home/piluso/miniconda3/envs/deepslice/lib/python3.7/site-packages/DeepSlice/main.py", line 131, in propagate_angles
    self.predictions, method, self.species
  File "/home/piluso/miniconda3/envs/deepslice/lib/python3.7/site-packages/DeepSlice/coord_post_processing/angle_methods.py", line 112, in propagate_angles
    DV_angle_list, ML_angle_list, method, depths, species
  File "/home/piluso/miniconda3/envs/deepslice/lib/python3.7/site-packages/DeepSlice/coord_post_processing/angle_methods.py", line 88, in get_mean_angle
    weighted_accuracy = [weighted_accuracy[int(y)] for y in df_center]
  File "/home/piluso/miniconda3/envs/deepslice/lib/python3.7/site-packages/DeepSlice/coord_post_processing/angle_methods.py", line 88, in <listcomp>
    weighted_accuracy = [weighted_accuracy[int(y)] for y in df_center]
IndexError: index 529 is out of bounds for axis 0 with size 528
Could you please help me solve this issue?

Add DeepSlice to the conda-forge repository

Now that DeepSlice is available on PyPI, could you look into uploading it to conda-forge? I'm not sure how much work it would take to set up, but it would make my life a bit easier.

No such file error Deepslice_cli_v1.1.5.1.py

I'm trying to run Deepslice in combination with the GUI of ABBA. When loading sections I get the following error.
Can anyone advise on how to add the deepslice_cli_v1.1.5.1.py file as it's not contained in the downloaded files?

java.nio.file.NoSuchFileException: C:\Users\username.conda\envs\deepslice\deepslice_cli_v1.1.5.1.py
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(WindowsFileSystemProvider.java:230)
at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
at java.nio.file.Files.newOutputStream(Files.java:216)
at java.nio.file.Files.copy(Files.java:3016)
at ch.epfl.biop.wrappers.deepslice.DeepSlice.ensureScriptIsCopied(DeepSlice.java:77)
at ch.epfl.biop.wrappers.deepslice.DeepSlice.execute(DeepSlice.java:90)
at ch.epfl.biop.wrappers.deepslice.DeepSlice.execute(DeepSlice.java:177)
at ch.epfl.biop.wrappers.BiopWrappersCheck.isDeepSliceSet(BiopWrappersCheck.java:70)
at ch.epfl.biop.wrappers.deepslice.ij2commands.DeepSlicePrefsSet.run(DeepSlicePrefsSet.java:36)
at org.scijava.command.CommandModule.run(CommandModule.java:196)
at org.scijava.module.ModuleRunner.run(ModuleRunner.java:165)
at org.scijava.module.ModuleRunner.call(ModuleRunner.java:125)
at org.scijava.module.ModuleRunner.call(ModuleRunner.java:64)
at org.scijava.thread.DefaultThreadService.lambda$wrap$2(DefaultThreadService.java:247)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Could not copy CLI script: C:\Users\username.conda\envs\deepslice\deepslice_cli_v1.1.5.1.py
Please try to add the script file manually
The CLI script to DeepSlice could not be copied to the env folder (C:\Users\username.conda\envs\deepslice)
You can try to copy manually deepslice_cli_v1.1.5.1.py to this folder

Update tensorflow?

Hello !

I'm trying to see if I can update some dependencies of abba_python (pyimagej, python version, etc.) but DeepSlice relies on tensorflow 1.15 and this seems to be the bottleneck here.

I know that such an update is complicated. Do you plan to update tensorflow at some point, or is it such a pain that this won't happen?

If this will not happen, I'll probably try (one day) to go for a client-(local) server architecture.

Cheers,

Nico

Predict

Hi there,

When I tried predict(), I got this error:

"DeepSlice.predict('D:\DeepSlice\Brain slices', prop_angles=False)"

Looking forward to hearing from you.

DeepSlice web does not seem to upload images

Hi @PolarBean

Thanks again for setting up DeepSlice. I'm not sure if the website is still working. Uploads that were almost instantaneous last year (a few tens of kb per picture) seem to be stuck forever now. Is there an issue or should I just wait longer?

Fix for Tensorflow 2.xx load_model bug

The model is unable to load_weights in tensorflow 2.xx versions because of the removal of the _layers attribute of keras Models.

The base_model._layers.pop() call in the original code only cosmetically removes the last two layers in the Xception model, but during the actual execution of the graph, these last two layers (the AveragePooling and Softmax layers) are still being performed. Thus, the output shape of the Xception layer in the overall DSModel.model is still (1000,) and correctly interfaces with the rest of the model. However, if you try to access base_model.layers or even base_model.summary(), these last two layers are hidden and nowhere to be seen.

My workaround is, instead of using model.load_weights(), we must manually set each of the layers using model.layers[idx].set_weights(list_of_numpy_weights) after loading the weights in with h5py. I wrote this function to be included in the neural_network.py module. It should be called whenever model.load_weights() was being called:

import h5py
import numpy as np

def load_xception_weights(model, weights):
    with h5py.File(weights, "r") as new:
        # set weight of each layer manually
        model.layers[1].set_weights([new["dense"]["dense"]["kernel:0"], new["dense"]["dense"]["bias:0"]])
        model.layers[2].set_weights([new["dense_1"]["dense_1"]["kernel:0"], new["dense_1"]["dense_1"]["bias:0"]])
        model.layers[3].set_weights([new["dense_2"]["dense_2"]["kernel:0"], new["dense_2"]["dense_2"]["bias:0"]])

        # Set the weights of the xception model 
        weight_names = new["xception"].attrs["weight_names"].tolist()
        weight_names_layers = [name.decode("utf-8").split("/")[0] for name in weight_names]

        for i in range(len(model.layers[0].layers)):
            name_of_layer = model.layers[0].layers[i].name
            # if layer name is in the weight names, then we will set weights
            if name_of_layer in weight_names_layers:
                # Get name of weights in the layer
                layer_weight_names = []
                for weight in model.layers[0].layers[i].weights:
                    layer_weight_names.append(weight.name.split("/")[1])
                h5_group = new["xception"][name_of_layer]
                weights_list = [np.array(h5_group[kk]) for kk in layer_weight_names]
                model.layers[0].layers[i].set_weights(weights_list)
    return model

I can make a pull request with the full fix if you'd like.

deepslice.com.au seems to be down

The page reads:

Something went wrong :-(
Something went wrong while trying to load this website; please try again later.
If it is your site, you should check your logs to determine what the problem is.

Client-side-generated .xml file: folder duplication in image file paths

As I was testing different things for a previous issue, I noticed that the client-side-generated .xml files were the ones that did not show the images in QuickNII. If you look at the file path at the top of the window of image 2 below and compare it to the same in image 1, you will see it too.

Image1 (Online-Generated)
OnlineJPGAlignment

Image2 (Client-Side-Generated)
ClientSideJPGAlignment4

The "ch1_DAPI" folder is listed twice in the path for the one generated on the client side. Consequently, the images don't get loaded into QuickNII for comparison to the atlas. I originally saved the output of the model to the parent folder ("Images") and tried saving it to the child folder with the images, to no avail. The image file paths still contained the duplicate folder. I can open a separate bug report for this if you'd like.

Kernel crashes when running "from DeepSlice import DSModel"

Hi, I installed DeepSlice and opened the example notebook in my default browser (Google Chrome) from the terminal. The first few steps don't give me any issues, but when I run the import-model step, I get this error: "The kernel appears to have died. It will restart automatically." If I run the next step, I am told "NameError: name 'DSModel' is not defined," indicating that the import step failed.

Is there a workaround for this, besides relying on the DeepSlice web version?
Thanks in advance for any advice!

Got wrong results from DS-GPU 1.1.2 on my own PC / Same output for all sections

Hey, thanks for developing such a powerful tool. I know the website https://www.deepslice.com.au/ provides an easy way to align slices. Yet I sometimes have to use it offline, so I installed DS-GPU 1.1.2 on my own PC. To validate it, I aligned the 35 slices in the GLTa folder with DS-GPU, which gave a wrong result, as attached. All the predicted depths were around 438. However, when uploading the same 35 slices to the web interface, I got the right result, with depths ranging from 48 to 422.
Could you please help me with the troubleshooting?
Thank you again.

BTW, the s104 slice has been renamed as 641_2002_2568_NM01_s104, whose original name is 641_2002_2567_NM01_s104.

image

DS-GPU-Results.csv

DeepSlice results only produce atlas images without histology overlay when opening json file in QuickNII

After downloading the json/xml file from the web version of DeepSlice, I try to open it in QuickNII to make minor adjustments, but when loading the json/xml file all that shows up are the atlas images and no histology. Similarly, if I try to open the file in VisuAlign I only get an error message in the terminal. I've made sure the histology images are numbered correctly, have fixed widths of 1500 pixels, and have the json/xml file in the same folder, so they should be formatted correctly for QuickNII. I tried deleting and redownloading QuickNII to no avail. Using the QuickNII FileBuilder on the same files I supplied to DeepSlice works as normal, and the histology images show up in QuickNII. Any idea what could be causing this issue?

save the results

@PolarBean Thank you for developing deepslice tool for brain image registration.

I managed to install the tool on Ubuntu 20.04. However when I tested the following script, I got error. Can you help?

from DeepSlice import DeepSlice 
Model = DeepSlice()
Model.Build()
Model.predict("/home/fgao/deepslice_registration/")

PIL.Image.DecompressionBombError: Image size (700105140 pixels) exceeds limit of 178956970 pixels, could be decompression bomb DOS attack.

I used the following script to fix the above problem.

from PIL import Image
Image.MAX_IMAGE_PIXELS = None
Model.predict("/home/fgao/deepslice_registration/")
Model.Save_Results("deepslice_models")

Then, the following error came out:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/fgao/software/DeepSlice/DeepSlice.py", line 490, in Save_Results
    width, height = get_image_size(self.image_dir + os.path.sep + file)
  File "/home/fgao/software/DeepSlice/DeepSlice.py", line 61, in get_image_size
    raise Exception(f"Invalid filetype: {head}")
Exception: Invalid filetype: b'II*\x00\x08\x00\x00\x00\x0e\x00\x00\x01\x04\x00\x01\x00\x00\x00[g\x00\x00\x01\x01'

Cannot pip install DeepSlice

Hi, I'm trying to install DeepSlice using pip but am getting this specific error:

"ERROR: Cannot install deepslice==1.0.0, deepslice==1.0.1, deepslice==1.0.11, deepslice==1.0.2, deepslice==1.0.3, deepslice==1.0.4, deepslice==1.0.5, deepslice==1.0.6, deepslice==1.1.0, deepslice==1.1.1, deepslice==1.1.2, deepslice==1.1.3, deepslice==1.1.4, deepslice==1.1.5 and deepslice==1.1.6 because these package versions have conflicting dependencies.

The conflict is caused by:
deepslice 1.1.6 depends on tensorflow==1.15.0
deepslice 1.1.5 depends on tensorflow==1.15.0
deepslice 1.1.4 depends on tensorflow==1.15.0
deepslice 1.1.3 depends on tensorflow==1.15.0
deepslice 1.1.2 depends on tensorflow==1.15.0
deepslice 1.1.1 depends on tensorflow==1.15.0
deepslice 1.1.0 depends on tensorflow==1.15.0
deepslice 1.0.11 depends on tensorflow==1.15.0
deepslice 1.0.6 depends on tensorflow==1.15.0
deepslice 1.0.5 depends on tensorflow==1.15.0
deepslice 1.0.4 depends on tensorflow==1.15.0
deepslice 1.0.3 depends on tensorflow==1.15.0
deepslice 1.0.2 depends on tensorflow==1.15.0
deepslice 1.0.1 depends on tensorflow==1.15.0
deepslice 1.0.0 depends on tensorflow==1.15.0

To fix this you could try to:

  1. loosen the range of package versions you've specified
  2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts"

Has anyone run into this error, and what have you done to get around this issue?

Thanks!

Hemi-brains

Dear DeepSlice team,

I will be very happy to test DeepSlice. Is it however possible to upload coronal slices of a single hemisphere (brain cut in two)? Or is it only intended for full coronal sections?

Thanks a lot,
Best,
Amandine

Local run of deepslice performs poorly

I have a dataset that was quite well registered by the web-based DeepSlice. When I clone the repository and run DeepSlice_example.ipynb with my dataset, I get a very poor registration. Could it be because the arguments are slightly different, or is it the trained model that is different?
The arguments are:

  • Normalise section angles equivalent to Model.predict(prop_angles=True)
  • Slower but .. is wise=True
  • Is huber used in the web version?

Thanks!

Compatibility Issue with DeepSlice Output and QuickNII: Unable to Load JSON Files

Hello,

I've been successfully running DeepSlice via Conda and it works great. However, I've encountered an issue with the workflow integration. The prediction results from DeepSlice are generated in CSV and JSON formats. Unfortunately, the version of QuickNII I am using (QuickNII-ABAMouse-v3-2017 for macOS) in the "Manage Data" section does not support loading JSON files.

Currently, I am able to open the JSON files directly with VisuAlign, but I would prefer to use QuickNII for this step in my workflow. Is there a recommended solution for this issue? Would converting the JSON files to XML be advisable, or is there a possibility to include an XML export option in DeepSlice’s output?

Any advice or guidance on this matter would be greatly appreciated.

Thank you!

Conflicting dependencies

Dear DeepSlice Team

The installation process failed on my Ubuntu 20.04 LTS machine due to conflicting dependencies when using pip:

ERROR: Cannot install deepslice==1.0.0, deepslice==1.0.1, deepslice==1.0.11, deepslice==1.0.2, deepslice==1.0.3, deepslice==1.0.4, deepslice==1.0.5, deepslice==1.0.6, deepslice==1.1.0 and deepslice==1.1.1 because these package versions have conflicting dependencies.

The conflict is caused by:
deepslice 1.1.1 depends on tensorflow==1.15.0
deepslice 1.1.0 depends on tensorflow==1.15.0
deepslice 1.0.11 depends on tensorflow==1.15.0
deepslice 1.0.6 depends on tensorflow==1.15.0
deepslice 1.0.5 depends on tensorflow==1.15.0
deepslice 1.0.4 depends on tensorflow==1.15.0
deepslice 1.0.3 depends on tensorflow==1.15.0
deepslice 1.0.2 depends on tensorflow==1.15.0
deepslice 1.0.1 depends on tensorflow==1.15.0
deepslice 1.0.0 depends on tensorflow==1.15.0

problem with Jupyter notebook

Hi, I am trying out the notebook. However, I got an error at the step of building the model:
from DeepSlice import DeepSlice


ModuleNotFoundError Traceback (most recent call last)
in ()
----> 1 from DeepSlice import DeepSlice

ModuleNotFoundError: No module named 'DeepSlice'

Any idea what I am doing wrong?

SSL module is not available

Hi,
I installed the environment as specified (I tried both the PyPI install and the from-source GPU install) and I am getting an error right at the start, and I am not sure how to fix it.

The error occurs on the line that instantiates the class:
Model = DSModel(species)

The model cannot load and, more importantly, cannot download the weights.

The error:

Exception has occurred: SSLError
HTTPSConnectionPool(host='data-proxy.ebrains.eu', port=443): Max retries exceeded with url: /api/v1/buckets/deepslice/weights/xception_weights_tf_dim_ordering_tf_kernels.h5 (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))
urllib3.exceptions.SSLError: Can't connect to HTTPS URL because the SSL module is not available.

During handling of the above exception, another exception occurred:

urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='data-proxy.ebrains.eu', port=443): Max retries exceeded with url: /api/v1/buckets/deepslice/weights/xception_weights_tf_dim_ordering_tf_kernels.h5 (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))

During handling of the above exception, another exception occurred:

  File "C:\Users\Jackie.MEDICINE\Desktop\DeepSlice_codes\DeepSlice\metadata\metadata_loader.py", line 30, in download_file
    r = requests.get(url, allow_redirects=True)
  File "C:\Users\Jackie.MEDICINE\Desktop\DeepSlice_codes\DeepSlice\metadata\metadata_loader.py", line 45, in get_data_path
    download_file(url_path_dict["url"], path + url_path_dict["path"])
  File "C:\Users\Jackie.MEDICINE\Desktop\DeepSlice_codes\DeepSlice\main.py", line 19, in __init__
    xception_weights =   metadata_loader.get_data_path(self.config["weight_file_paths"]["xception_imagenet"], self.metadata_path)
  File "C:\Users\Jackie.MEDICINE\Desktop\DeepSlice_codes\runner.py", line 20, in <module>
    Model = DSModel(species)
requests.exceptions.SSLError: HTTPSConnectionPool(host='data-proxy.ebrains.eu', port=443): Max retries exceeded with url: /api/v1/buckets/deepslice/weights/xception_weights_tf_dim_ordering_tf_kernels.h5 (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))

Has anyone else encountered this error? Any advice on how to solve it?
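This error usually means the Python interpreter itself was built or packaged without the ssl extension (a broken conda/OpenSSL setup), rather than a DeepSlice bug. A quick stdlib check, independent of DeepSlice:

```python
# If this import fails, the interpreter has no SSL support at all and the
# fix is to recreate the environment (or reinstall Python/OpenSSL); if it
# succeeds, look elsewhere (proxy, firewall, certificates).
import ssl
print(ssl.OPENSSL_VERSION)
```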

Supporting tiff

Hi, the documentation/guide says DeepSlice supports TIFF, but the files were not recognized when I actually ran the code. Besides manually converting the files, is there any other solution? Thank you!
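If TIFF inputs are not picked up, one stopgap is batch-converting them to PNG before prediction. A sketch using Pillow (this is not part of DeepSlice; the folder path is whatever directory holds your sections):

```python
from pathlib import Path

from PIL import Image

def tiffs_to_png(folder):
    """Convert every .tif/.tiff in `folder` to a PNG saved alongside it."""
    for tif in Path(folder).glob("*.tif*"):
        # Flatten to RGB so multi-channel/16-bit TIFFs save as plain PNGs.
        Image.open(tif).convert("RGB").save(tif.with_suffix(".png"))
```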

Cannot open the web DeepSlice

Am I the only person who recently cannot open the web DeepSlice?

It said:
"Something went wrong :-(
Something went wrong while trying to load this website; please try again later.

If it is your site, you should check your logs to determine what the problem is."

OpenSSL issue

Hello!

I've recently experienced this issue when trying to use DeepSlice 1.1.6 (I've got the same error with 1.1.5):

from DeepSlice.read_and_write import QuickNII_functions
  File "C:\ProgramData\miniforge3\envs\deepslice\lib\site-packages\DeepSlice\__init__.py", line 1, in <module>
    from .main import DSModel
  File "C:\ProgramData\miniforge3\envs\deepslice\lib\site-packages\DeepSlice\main.py", line 5, in <module>
    from .metadata import metadata_loader
  File "C:\ProgramData\miniforge3\envs\deepslice\lib\site-packages\DeepSlice\metadata\metadata_loader.py", line 3, in <module>
    import requests
  File "C:\ProgramData\miniforge3\envs\deepslice\lib\site-packages\requests\__init__.py", line 43, in <module>
    import urllib3
  File "C:\ProgramData\miniforge3\envs\deepslice\lib\site-packages\urllib3\__init__.py", line 42, in <module>
    "urllib3 v2.0 only supports OpenSSL 1.1.1+, currently "
ImportError: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'OpenSSL 1.1.0i  14 Aug 2018'. See: https://github.com/urllib3/urllib3/issues/2168

Do you know what's wrong and how to fix it?

FYI, I'm on Windows 11. I started by installing Miniforge, updated conda, and then ran the following:

conda create -n deepslice python=3.7
conda activate deepslice
conda install pip
pip install DeepSlice==1.1.6
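Going by the traceback, urllib3 2.x refuses to run because the Python 3.7 conda build ships an OpenSSL older than 1.1.1. A common workaround (hedged; the version numbers come from the traceback above) is to pin urllib3 to the 1.x series in the same environment:

```shell
# urllib3 2.x requires the interpreter's ssl module to be built against
# OpenSSL >= 1.1.1; on this Python 3.7 build it is not, so stay on 1.x:
pip install "urllib3<2"
```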

"conda env create -f DS-CPU.yml" leads to "Solving environment: failed" & "ResolvePackageNotFound:"

Hi Master of DeepSlicing,

I followed the installation instructions and am facing the following failure:

(envDS) Path\DeepSlice\conda_environments>conda env create -f DS-CPU.yml
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

  • lz4-c==1.9.3=h295c915_1
  • mkl_random==1.2.2=py37h51133e4_0
  • sniffio==1.2.0=py37h06a4308_1
  • git==2.34.1=pl5262hc120c5b_0
  • grpcio==1.42.0=py37hce63b2e_0
  • openssl==1.1.1o=h7f8727e_0
  • blinker==1.4=py37h06a4308_0
  • libxml2==2.9.12=h03d6c58_0
  • tornado==6.1=py37h27cfd23_0
  • pip==21.2.2=py37h06a4308_0
  • libzopfli==1.0.3=he6710b0_0
  • yaml==0.2.5=h7b6447c_0
  • libgomp==9.3.0=h5101ec6_17
  • wrapt==1.13.3=py37h7f8727e_2
  • cffi==1.15.0=py37hd667e15_1
  • libcurl==7.80.0=h0b77cf5_0
  • astor==0.8.1=py37h06a4308_0
  • freetype==2.11.0=h70c0345_0
  • imagecodecs==2021.8.26=py37h4cda21f_0
  • hdf5==1.10.6=hb1b8bf9_0
  • libffi==3.3=he6710b0_2
  • icu==58.2=he6710b0_3
  • lxml==4.8.0=py37h1f438cf_0
  • nbconvert==6.3.0=py37h06a4308_0
  • snappy==1.1.8=he6710b0_0
  • mkl-service==2.4.0=py37h7f8727e_0
  • matplotlib-base==3.5.1=py37ha18d171_0
  • bzip2==1.0.8=h7b6447c_0
  • tensorflow-base==2.2.0=mkl_py37hd506778_0
  • cytoolz==0.11.0=py37h7b6447c_0
  • importlib-metadata==4.8.2=py37h06a4308_0
  • _openmp_mutex==4.5=1_gnu
  • lcms2==2.12=h3be6417_0
  • libedit==3.1.20210910=h7f8727e_0
  • libdeflate==1.8=h7f8727e_5
  • zeromq==4.3.4=h2531618_0
  • bottleneck==1.3.4=py37hce1f21e_0
  • jupyter_core==4.9.1=py37h06a4308_0
  • pyyaml==6.0=py37h7f8727e_1
  • libstdcxx-ng==9.3.0=hd4cf53a_17
  • readline==8.1.2=h7f8727e_1
  • pyrsistent==0.18.0=py37heee7806_0
  • brunsli==0.1=h2531618_0
  • charls==2.2.0=h2531618_0
  • libssh2==1.9.0=h1ba5d50_1
  • ld_impl_linux-64==2.35.1=h7274673_9
  • libgcc-ng==9.3.0=h5101ec6_17
  • websocket-client==0.58.0=py37h06a4308_4
  • cryptography==36.0.0=py37h9ce1e76_0
  • argon2-cffi-bindings==21.2.0=py37h7f8727e_0
  • debugpy==1.5.1=py37h295c915_0
  • markupsafe==2.0.1=py37h27cfd23_0
  • curl==7.80.0=h7f8727e_0
  • kiwisolver==1.3.2=py37h295c915_0
  • scipy==1.7.3=py37hc147768_0
  • sqlite==3.37.2=hc218d9a_0
  • brotlipy==0.7.0=py37h27cfd23_1003
  • xz==5.2.5=h7b6447c_0
  • libtiff==4.1.0=h2733197_1
  • pywavelets==1.1.1=py37h7b6447c_2
  • libev==4.33=h7f8727e_1
  • libsodium==1.0.18=h7b6447c_0
  • giflib==5.2.1=h7b6447c_0
  • openjpeg==2.4.0=h3ad879b_0
  • notebook==6.4.8=py37h06a4308_0
  • gettext==0.21.0=hf68c758_0
  • perl==5.26.2=h14c3975_0
  • blosc==1.21.0=h8c45485_0
  • cfitsio==3.470=hf0d0db6_6
  • mkl_fft==1.3.1=py37hd3c417c_0
  • pcre2==10.37=he7ceb23_1
  • libprotobuf==3.19.1=h4ff587b_0
  • c-ares==1.18.1=h7f8727e_0
  • zstd==1.4.9=haebb681_0
  • aiohttp==3.8.1=py37h7f8727e_0
  • libwebp==1.2.0=h89dd481_0
  • zfp==0.5.5=h295c915_6
  • libnghttp2==1.46.0=hce63b2e_0
  • ipykernel==6.4.1=py37h06a4308_1
  • frozenlist==1.2.0=py37h7f8727e_0
  • libgfortran4==7.5.0=ha8ba4b0_17
  • libaec==1.0.4=he6710b0_1
  • mistune==0.8.4=py37h14c3975_1001
  • locket==0.2.1=py37h06a4308_1
  • pyzmq==22.3.0=py37h295c915_2
  • ncurses==6.3=h7f8727e_2
  • h5py==2.10.0=py37hd6299e0_1
  • tk==8.6.11=h1ccaba5_0
  • ca-certificates==2022.4.26=h06a4308_0
  • tensorflow==2.2.0=mkl_py37h6e9ce2d_0
  • anyio==3.5.0=py37h06a4308_0
  • termcolor==1.1.0=py37h06a4308_1
  • protobuf==3.19.1=py37h295c915_0
  • mkl==2021.4.0=h06a4308_640
  • python==3.7.11=h12debd9_0
  • scikit-learn==0.23.2=py37h0573a6f_0
  • zlib==1.2.11=h7f8727e_4
  • ipython==7.31.1=py37h06a4308_0
  • jpeg==9d=h7f8727e_0
  • libxslt==1.1.34=hc22bd24_0
  • numexpr==2.8.1=py37h6abb31d_0
  • expat==2.4.4=h295c915_0
  • pillow==8.4.0=py37h5aabda8_0
  • libgfortran-ng==7.5.0=ha8ba4b0_17
  • setuptools==58.0.4=py37h06a4308_0
  • scikit-image==0.17.2=py37hdf5156a_0
  • intel-openmp==2021.4.0=h06a4308_3561
  • lerc==3.0=h295c915_0
  • markdown==3.3.4=py37h06a4308_0
  • certifi==2022.5.18.1=py37h06a4308_0
  • terminado==0.13.1=py37h06a4308_0
  • multidict==5.2.0=py37h7f8727e_2
  • libpng==1.6.37=hbc83047_0
  • jxrlib==1.1=h7b6447c_2
  • krb5==1.19.2=hac12032_0
  • brotli==1.0.9=he6710b0_2
  • yarl==1.6.3=py37h27cfd23_0

Do you have any tips?
Cheers,

VK
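The pins in DS-CPU.yml include exact Linux build strings (e.g. h295c915_1) that do not exist on other platforms or on current channels, which is what ResolvePackageNotFound reports. One hedged workaround is stripping the build suffix so the solver matches on version only, demonstrated here on a single sample line (pipe the whole DS-CPU.yml through the same sed, then run conda env create on the result):

```shell
# Drop the trailing "=<build>" from a conda dependency pin so the solver
# can pick a platform-appropriate build of the same version.
strip_build() { sed -E 's/^([[:space:]]*-[[:space:]]*[^=]+=[^=]+)=.*$/\1/'; }

echo '  - lz4-c=1.9.3=h295c915_1' | strip_build   # -> "  - lz4-c=1.9.3"
```

Relaxing the pins changes which builds you get, so treat the resulting environment as best-effort rather than a faithful reproduction of the authors' setup.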

DeepSlice web gives much better results than the offline version

Hi, thanks for making DeepSlice and making it available to the wider community. When I initially stumbled across DeepSlice, I tried out the web version and was quite impressed with the registration results. I have since tried to replicate those results by running DeepSlice offline/locally, but with no success. The model runs with no issues, the output XML/JSON file structure seems fine and imports fine into QuickNII, but the registration is clearly garbage.

For example:

Autofluorescence image of slice 1 (of 11)

brain_slice_s001

Offline result:

brain_slice_s001-STPt_avg 2015

Online result (for comparison):

brain_slice_s001-STPt_avg 2015

Any pointers towards potential issues or what to try next would be much appreciated.


Additional information

Script used to create the offline results

from DeepSlice import DSModel

folder = "/path/to/images/"
Model = DSModel("mouse")
Model.predict(folder, ensemble=True, section_numbers=True)
Model.enforce_index_spacing(section_thickness=150)
Model.propagate_angles()
Model.save_predictions(folder + 'deepslice_results')

Settings used for getting the online results (=all boxes ticked)

Screenshot from 2023-09-20 13-04-53

Additional files

offline results XML

online results XML

original images

Upload the half photo on DeepSlice

Hi Harry,
I have another question about DeepSlice.
Since confocal imaging of a whole brain section is time-consuming, is it necessary to photograph the entire slice? Could I instead scan just half of the brain slice, especially for coronal sections? That would directly halve the imaging time, but it seems I cannot upload photos like that.
Perhaps this could be supported in an update, or perhaps I have simply not found the correct method.

I am looking forward to your kind reply.
Best wishes!

Zoey
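DeepSlice expects full coronal sections, so until half-sections are supported one crude workaround is to mirror the imaged hemisphere about the midline to synthesize a full-width image. The Pillow sketch below assumes the cut (medial) edge is the right-hand border of the input; this is a user-side hack, not an official DeepSlice feature.

```python
from PIL import Image

def mirror_half_section(path_in, path_out):
    """Paste a half-section and its left-right mirror side by side to
    approximate a full coronal section (assumes the midline is the
    right-hand edge of the input image)."""
    half = Image.open(path_in)
    full = Image.new(half.mode, (half.width * 2, half.height))
    full.paste(half, (0, 0))                                      # left half
    full.paste(half.transpose(Image.FLIP_LEFT_RIGHT), (half.width, 0))
    full.save(path_out)
```

The resulting alignment would only be trustworthy on the imaged hemisphere, since the mirrored half is synthetic.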

Get allen corresponding slice

Hello!

I started using DeepSlice to automatically find the slice plane. I can visualize the result in QuickNII and it seems to be working fine.

What I would like to do next is use the anchor vectors to slice the 3D brain dataset from the ABA, apply non-rigid deformation to improve the matching, and then perform mapping. I have thousands of slices with only a few per animal, and I need to do co-localisation, so I cannot really do anything manual here. I have looked for a way to convert the anchor vectors, with no luck.

Thanks and keep up the good work with this fantastic tool!
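For what it's worth, QuickNII-style anchoring encodes the section plane as an origin o and two in-plane vectors u, v in atlas voxel coordinates, with the voxel behind output pixel (x, y) at o + u*x/width + v*y/height. Under that reading (hedged; check it against the QuickNII documentation), a minimal nearest-neighbour sampler for extracting the matching atlas slice might look like:

```python
import numpy as np

def sample_slice(volume, o, u, v, out_w, out_h):
    """Sample a 2D slice from a 3D atlas volume using QuickNII-style
    anchoring vectors: position(x, y) = o + u*x/out_w + v*y/out_h.
    Nearest-neighbour, no interpolation -- a sketch, not DeepSlice API."""
    xs = np.linspace(0, 1, out_w, endpoint=False)
    ys = np.linspace(0, 1, out_h, endpoint=False)
    gx, gy = np.meshgrid(xs, ys)                     # (out_h, out_w) grids
    coords = (np.asarray(o, float)[:, None, None]
              + np.asarray(u, float)[:, None, None] * gx
              + np.asarray(v, float)[:, None, None] * gy)
    # Round to the nearest voxel and clamp to the volume bounds.
    idx = np.clip(np.round(coords).astype(int),
                  0, np.array(volume.shape)[:, None, None] - 1)
    return volume[idx[0], idx[1], idx[2]]
```

Swapping the rounding for trilinear interpolation (e.g. scipy's map_coordinates) would give smoother slices; the coordinate construction stays the same.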

Dataset used for training

Can you please provide the dataset link? In the paper it is mentioned that the training data "included 131k images from the Allen database of slide-mounted histological sections processed for in situ hybridization (ISH), immunohistochemistry (IHC) or Nissl staining."

Unfortunately, I am having difficulty finding the dataset used to train the model and to establish this new benchmark. Can you please provide more details?

Thanks a lot in advance,
Sivan Schwartz

Installing DeepSlice on macos M1 (arm64)

Hey! I'm trying to install this tool on the macOS arm64 architecture. I guess this package is not built for this kind of architecture.

I created a new environment with an Intel x86 Miniconda and used Python 3.7 as you described (this version is not available for the arm64 architecture). After creating the environment, I installed DeepSlice via pip.

Although I think I installed it successfully (pip list shows DeepSlice 1.1.6), when I try to import DeepSlice I get: zsh: illegal hardware instruction python. I get the same error when I try to import tensorflow as tf.

If I use a Jupyter notebook in Visual Studio Code and try to run from DeepSlice import DSModel, I get: Running cells with 'DeepSlice' requires the ipykernel package. Run the following command to install 'ipykernel' into the Python environment. Command: 'conda install -n DeepSlice ipykernel --update-deps --force-reinstall'
Although I ran the suggested command, I keep getting the same error.
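On Apple Silicon, one common workaround (hedged, and not DeepSlice-specific) is to force conda to resolve osx-64 packages so the whole environment runs under Rosetta 2:

```shell
# Create an x86_64 (Rosetta-translated) environment on Apple Silicon so
# that Python 3.7 and TF1-era wheels remain resolvable:
CONDA_SUBDIR=osx-64 conda create -n deepslice python=3.7
conda activate deepslice
conda config --env --set subdir osx-64   # keep the env pinned to osx-64
pip install DeepSlice
```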

If I try to use the conda file provided in the repository, I get following errors:
(base) annateruel@MacBook-MacBook-Pro-de-Anna conda_environments % conda env create -f DS-CPU.yml
Channels:

  • defaults
  • conda-forge
    Platform: osx-64
    Collecting package metadata (repodata.json): done
    Solving environment: warning libmamba Problem type not implemented SOLVER_RULE_STRICT_REPO_PRIORITY
    (warning repeated many times)
    failed
    Channels:
  • defaults
  • conda-forge
    Platform: osx-64
    Collecting package metadata (repodata.json): done
    Solving environment: warning libmamba Problem type not implemented SOLVER_RULE_STRICT_REPO_PRIORITY
    (warning repeated many times)
    failed

LibMambaUnsatisfiableError: Encountered problems while solving:

  • nothing provides requested _openmp_mutex ==4.5 1_gnu
  • nothing provides requested aiohttp ==3.8.1 py37h7f8727e_0
  • nothing provides requested anyio ==3.5.0 py37h06a4308_0
  • nothing provides requested argon2-cffi-bindings ==21.2.0 py37h7f8727e_0
  • nothing provides requested astor ==0.8.1 py37h06a4308_0
  • nothing provides requested blinker ==1.4 py37h06a4308_0
  • nothing provides requested blosc ==1.21.0 h8c45485_0
  • nothing provides requested bottleneck ==1.3.4 py37hce1f21e_0
  • nothing provides requested brotli ==1.0.9 he6710b0_2
  • nothing provides requested brotlipy ==0.7.0 py37h27cfd23_1003
  • nothing provides requested brunsli ==0.1 h2531618_0
  • nothing provides requested bzip2 ==1.0.8 h7b6447c_0
  • nothing provides requested c-ares ==1.18.1 h7f8727e_0
  • nothing provides requested ca-certificates ==2022.4.26 h06a4308_0
  • nothing provides requested certifi ==2022.5.18.1 py37h06a4308_0
  • nothing provides requested cffi ==1.15.0 py37hd667e15_1
  • nothing provides requested cfitsio ==3.470 hf0d0db6_6
  • nothing provides requested charls ==2.2.0 h2531618_0
  • nothing provides requested cryptography ==36.0.0 py37h9ce1e76_0
  • nothing provides requested curl ==7.80.0 h7f8727e_0
  • nothing provides requested cytoolz ==0.11.0 py37h7b6447c_0
  • nothing provides requested debugpy ==1.5.1 py37h295c915_0
  • nothing provides requested expat ==2.4.4 h295c915_0
  • nothing provides requested freetype ==2.11.0 h70c0345_0
  • nothing provides requested frozenlist ==1.2.0 py37h7f8727e_0
  • nothing provides requested gettext ==0.21.0 hf68c758_0
  • nothing provides requested giflib ==5.2.1 h7b6447c_0
  • nothing provides requested git ==2.34.1 pl5262hc120c5b_0
  • nothing provides requested grpcio ==1.42.0 py37hce63b2e_0
  • nothing provides requested h5py ==2.10.0 py37hd6299e0_1
  • nothing provides requested hdf5 ==1.10.6 hb1b8bf9_0
  • nothing provides requested icu ==58.2 he6710b0_3
  • nothing provides requested imagecodecs ==2021.8.26 py37h4cda21f_0
  • nothing provides requested importlib-metadata ==4.8.2 py37h06a4308_0
  • nothing provides requested intel-openmp ==2021.4.0 h06a4308_3561
  • nothing provides requested ipykernel ==6.4.1 py37h06a4308_1
  • nothing provides requested ipython ==7.31.1 py37h06a4308_0
  • nothing provides requested jpeg ==9d h7f8727e_0
  • nothing provides requested jupyter_core ==4.9.1 py37h06a4308_0
  • nothing provides requested jxrlib ==1.1 h7b6447c_2
  • nothing provides requested kiwisolver ==1.3.2 py37h295c915_0
  • nothing provides requested krb5 ==1.19.2 hac12032_0
  • nothing provides requested lcms2 ==2.12 h3be6417_0
  • nothing provides requested ld_impl_linux-64 ==2.35.1 h7274673_9
  • nothing provides requested lerc ==3.0 h295c915_0
  • nothing provides requested libaec ==1.0.4 he6710b0_1
  • nothing provides requested libcurl ==7.80.0 h0b77cf5_0
  • nothing provides requested libdeflate ==1.8 h7f8727e_5
  • nothing provides requested libedit ==3.1.20210910 h7f8727e_0
  • nothing provides requested libev ==4.33 h7f8727e_1
  • nothing provides requested libffi ==3.3 he6710b0_2
  • nothing provides requested libgcc-ng ==9.3.0 h5101ec6_17
  • nothing provides requested libgfortran-ng ==7.5.0 ha8ba4b0_17
  • nothing provides requested libgfortran4 ==7.5.0 ha8ba4b0_17
  • nothing provides requested libgomp ==9.3.0 h5101ec6_17
  • nothing provides requested libnghttp2 ==1.46.0 hce63b2e_0
  • nothing provides requested libpng ==1.6.37 hbc83047_0
  • nothing provides requested libprotobuf ==3.19.1 h4ff587b_0
  • nothing provides requested libsodium ==1.0.18 h7b6447c_0
  • nothing provides requested libssh2 ==1.9.0 h1ba5d50_1
  • nothing provides requested libstdcxx-ng ==9.3.0 hd4cf53a_17
  • nothing provides requested libtiff ==4.1.0 h2733197_1
  • nothing provides requested libwebp ==1.2.0 h89dd481_0
  • nothing provides requested libxml2 ==2.9.12 h03d6c58_0
  • nothing provides requested libxslt ==1.1.34 hc22bd24_0
  • nothing provides requested libzopfli ==1.0.3 he6710b0_0
  • nothing provides requested locket ==0.2.1 py37h06a4308_1
  • nothing provides requested lxml ==4.8.0 py37h1f438cf_0
  • nothing provides requested lz4-c ==1.9.3 h295c915_1
  • nothing provides requested markdown ==3.3.4 py37h06a4308_0
  • nothing provides requested markupsafe ==2.0.1 py37h27cfd23_0
  • nothing provides requested matplotlib-base ==3.5.1 py37ha18d171_0
  • nothing provides requested mistune ==0.8.4 py37h14c3975_1001
  • nothing provides requested mkl ==2021.4.0 h06a4308_640
  • nothing provides requested mkl-service ==2.4.0 py37h7f8727e_0
  • nothing provides requested mkl_fft ==1.3.1 py37hd3c417c_0
  • nothing provides requested mkl_random ==1.2.2 py37h51133e4_0
  • nothing provides requested multidict ==5.2.0 py37h7f8727e_2
  • nothing provides requested nbconvert ==6.3.0 py37h06a4308_0
  • nothing provides requested ncurses ==6.3 h7f8727e_2
  • nothing provides requested notebook ==6.4.8 py37h06a4308_0
  • nothing provides requested numexpr ==2.8.1 py37h6abb31d_0
  • nothing provides requested openjpeg ==2.4.0 h3ad879b_0
  • nothing provides requested openssl ==1.1.1o h7f8727e_0
  • nothing provides requested pcre2 ==10.37 he7ceb23_1
  • nothing provides requested perl ==5.26.2 h14c3975_0
  • nothing provides requested pillow ==8.4.0 py37h5aabda8_0
  • nothing provides requested pip ==21.2.2 py37h06a4308_0
  • nothing provides requested protobuf ==3.19.1 py37h295c915_0
  • nothing provides requested pyrsistent ==0.18.0 py37heee7806_0
  • nothing provides requested pysocks ==1.7.1 py37_1
  • nothing provides requested python ==3.7.11 h12debd9_0
  • nothing provides requested pywavelets ==1.1.1 py37h7b6447c_2
  • nothing provides requested pyyaml ==6.0 py37h7f8727e_1
  • nothing provides requested pyzmq ==22.3.0 py37h295c915_2
  • nothing provides requested readline ==8.1.2 h7f8727e_1
  • nothing provides requested scikit-image ==0.17.2 py37hdf5156a_0
  • nothing provides requested scikit-learn ==0.23.2 py37h0573a6f_0
  • nothing provides requested scipy ==1.7.3 py37hc147768_0
  • nothing provides requested setuptools ==58.0.4 py37h06a4308_0
  • nothing provides requested snappy ==1.1.8 he6710b0_0
  • nothing provides requested sniffio ==1.2.0 py37h06a4308_1
  • nothing provides requested sqlite ==3.37.2 hc218d9a_0
  • nothing provides requested tensorflow ==2.2.0 mkl_py37h6e9ce2d_0
  • nothing provides requested tensorflow-base ==2.2.0 mkl_py37hd506778_0
  • nothing provides requested termcolor ==1.1.0 py37h06a4308_1
  • nothing provides requested terminado ==0.13.1 py37h06a4308_0
  • nothing provides requested tk ==8.6.11 h1ccaba5_0
  • nothing provides requested tornado ==6.1 py37h27cfd23_0
  • nothing provides requested websocket-client ==0.58.0 py37h06a4308_4
  • nothing provides requested wrapt ==1.13.3 py37h7f8727e_2
  • nothing provides requested xz ==5.2.5 h7b6447c_0
  • nothing provides requested yaml ==0.2.5 h7b6447c_0
  • nothing provides requested yarl ==1.6.3 py37h27cfd23_0
  • nothing provides requested zeromq ==4.3.4 h2531618_0
  • nothing provides requested zfp ==0.5.5 h295c915_6
  • nothing provides requested zlib ==1.2.11 h7f8727e_4
  • nothing provides requested zstd ==1.4.9 haebb681_0
  • package flask-1.1.2-pyhd3eb1b0_0 requires jinja2 >=2.10.1,<3.0, but none of the providers can be installed

Could not solve for environment specs
The following packages are incompatible
β”œβ”€ _openmp_mutex ==4.5 1_gnu does not exist (perhaps a typo or a missing channel);
β”œβ”€ aiohttp ==3.8.1 py37h7f8727e_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ anyio ==3.5.0 py37h06a4308_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ argon2-cffi-bindings ==21.2.0 py37h7f8727e_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ astor ==0.8.1 py37h06a4308_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ blinker ==1.4 py37h06a4308_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ blosc ==1.21.0 h8c45485_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ bottleneck ==1.3.4 py37hce1f21e_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ brotli ==1.0.9 he6710b0_2 does not exist (perhaps a typo or a missing channel);
β”œβ”€ brotlipy ==0.7.0 py37h27cfd23_1003 does not exist (perhaps a typo or a missing channel);
β”œβ”€ brunsli ==0.1 h2531618_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ bzip2 ==1.0.8 h7b6447c_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ c-ares ==1.18.1 h7f8727e_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ ca-certificates ==2022.4.26 h06a4308_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ certifi ==2022.5.18.1 py37h06a4308_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ cffi ==1.15.0 py37hd667e15_1 does not exist (perhaps a typo or a missing channel);
β”œβ”€ cfitsio ==3.470 hf0d0db6_6 does not exist (perhaps a typo or a missing channel);
β”œβ”€ charls ==2.2.0 h2531618_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ cryptography ==36.0.0 py37h9ce1e76_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ curl ==7.80.0 h7f8727e_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ cytoolz ==0.11.0 py37h7b6447c_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ debugpy ==1.5.1 py37h295c915_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ expat ==2.4.4 h295c915_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ flask ==1.1.2 pyhd3eb1b0_0 is installable and it requires
β”‚ └─ jinja2 >=2.10.1,<3.0 , which can be installed;
β”œβ”€ freetype ==2.11.0 h70c0345_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ frozenlist ==1.2.0 py37h7f8727e_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ gettext ==0.21.0 hf68c758_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ giflib ==5.2.1 h7b6447c_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ git ==2.34.1 pl5262hc120c5b_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ grpcio ==1.42.0 py37hce63b2e_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ h5py ==2.10.0 py37hd6299e0_1 does not exist (perhaps a typo or a missing channel);
β”œβ”€ hdf5 ==1.10.6 hb1b8bf9_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ icu ==58.2 he6710b0_3 does not exist (perhaps a typo or a missing channel);
β”œβ”€ imagecodecs ==2021.8.26 py37h4cda21f_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ importlib-metadata ==4.8.2 py37h06a4308_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ intel-openmp ==2021.4.0 h06a4308_3561 does not exist (perhaps a typo or a missing channel);
β”œβ”€ ipykernel ==6.4.1 py37h06a4308_1 does not exist (perhaps a typo or a missing channel);
β”œβ”€ ipython ==7.31.1 py37h06a4308_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ jinja2 ==3.0.2 pyhd3eb1b0_0 is not installable because it conflicts with any installable versions previously reported;
β”œβ”€ jpeg ==9d h7f8727e_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ jupyter_core ==4.9.1 py37h06a4308_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ jxrlib ==1.1 h7b6447c_2 does not exist (perhaps a typo or a missing channel);
β”œβ”€ kiwisolver ==1.3.2 py37h295c915_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ krb5 ==1.19.2 hac12032_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ lcms2 ==2.12 h3be6417_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ ld_impl_linux-64 ==2.35.1 h7274673_9 does not exist (perhaps a typo or a missing channel);
β”œβ”€ lerc ==3.0 h295c915_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libaec ==1.0.4 he6710b0_1 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libcurl ==7.80.0 h0b77cf5_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libdeflate ==1.8 h7f8727e_5 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libedit ==3.1.20210910 h7f8727e_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libev ==4.33 h7f8727e_1 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libffi ==3.3 he6710b0_2 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libgcc-ng ==9.3.0 h5101ec6_17 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libgfortran-ng ==7.5.0 ha8ba4b0_17 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libgfortran4 ==7.5.0 ha8ba4b0_17 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libgomp ==9.3.0 h5101ec6_17 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libnghttp2 ==1.46.0 hce63b2e_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libpng ==1.6.37 hbc83047_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libprotobuf ==3.19.1 h4ff587b_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libsodium ==1.0.18 h7b6447c_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libssh2 ==1.9.0 h1ba5d50_1 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libstdcxx-ng ==9.3.0 hd4cf53a_17 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libtiff ==4.1.0 h2733197_1 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libwebp ==1.2.0 h89dd481_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libxml2 ==2.9.12 h03d6c58_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libxslt ==1.1.34 hc22bd24_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ libzopfli ==1.0.3 he6710b0_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ locket ==0.2.1 py37h06a4308_1 does not exist (perhaps a typo or a missing channel);
β”œβ”€ lxml ==4.8.0 py37h1f438cf_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ lz4-c ==1.9.3 h295c915_1 does not exist (perhaps a typo or a missing channel);
β”œβ”€ markdown ==3.3.4 py37h06a4308_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ markupsafe ==2.0.1 py37h27cfd23_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ matplotlib-base ==3.5.1 py37ha18d171_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ mistune ==0.8.4 py37h14c3975_1001 does not exist (perhaps a typo or a missing channel);
β”œβ”€ mkl-service ==2.4.0 py37h7f8727e_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ mkl ==2021.4.0 h06a4308_640 does not exist (perhaps a typo or a missing channel);
β”œβ”€ mkl_fft ==1.3.1 py37hd3c417c_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ mkl_random ==1.2.2 py37h51133e4_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ multidict ==5.2.0 py37h7f8727e_2 does not exist (perhaps a typo or a missing channel);
β”œβ”€ nbconvert ==6.3.0 py37h06a4308_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ ncurses ==6.3 h7f8727e_2 does not exist (perhaps a typo or a missing channel);
β”œβ”€ notebook ==6.4.8 py37h06a4308_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ numexpr ==2.8.1 py37h6abb31d_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ openjpeg ==2.4.0 h3ad879b_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ openssl ==1.1.1o h7f8727e_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ pcre2 ==10.37 he7ceb23_1 does not exist (perhaps a typo or a missing channel);
β”œβ”€ perl ==5.26.2 h14c3975_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ pillow ==8.4.0 py37h5aabda8_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ pip ==21.2.2 py37h06a4308_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ protobuf ==3.19.1 py37h295c915_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ pyrsistent ==0.18.0 py37heee7806_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ pysocks ==1.7.1 py37_1 does not exist (perhaps a typo or a missing channel);
β”œβ”€ python ==3.7.11 h12debd9_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ pywavelets ==1.1.1 py37h7b6447c_2 does not exist (perhaps a typo or a missing channel);
β”œβ”€ pyyaml ==6.0 py37h7f8727e_1 does not exist (perhaps a typo or a missing channel);
β”œβ”€ pyzmq ==22.3.0 py37h295c915_2 does not exist (perhaps a typo or a missing channel);
β”œβ”€ readline ==8.1.2 h7f8727e_1 does not exist (perhaps a typo or a missing channel);
β”œβ”€ scikit-image ==0.17.2 py37hdf5156a_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ scikit-learn ==0.23.2 py37h0573a6f_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ scipy ==1.7.3 py37hc147768_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ setuptools ==58.0.4 py37h06a4308_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ snappy ==1.1.8 he6710b0_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ sniffio ==1.2.0 py37h06a4308_1 does not exist (perhaps a typo or a missing channel);
β”œβ”€ sqlite ==3.37.2 hc218d9a_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ tensorflow-base ==2.2.0 mkl_py37hd506778_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ tensorflow ==2.2.0 mkl_py37h6e9ce2d_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ termcolor ==1.1.0 py37h06a4308_1 does not exist (perhaps a typo or a missing channel);
β”œβ”€ terminado ==0.13.1 py37h06a4308_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ tk ==8.6.11 h1ccaba5_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ tornado ==6.1 py37h27cfd23_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ websocket-client ==0.58.0 py37h06a4308_4 does not exist (perhaps a typo or a missing channel);
β”œβ”€ wrapt ==1.13.3 py37h7f8727e_2 does not exist (perhaps a typo or a missing channel);
β”œβ”€ xz ==5.2.5 h7b6447c_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ yaml ==0.2.5 h7b6447c_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ yarl ==1.6.3 py37h27cfd23_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ zeromq ==4.3.4 h2531618_0 does not exist (perhaps a typo or a missing channel);
β”œβ”€ zfp ==0.5.5 h295c915_6 does not exist (perhaps a typo or a missing channel);
β”œβ”€ zlib ==1.2.11 h7f8727e_4 does not exist (perhaps a typo or a missing channel);
└─ zstd ==1.4.9 haebb681_0 does not exist (perhaps a typo or a missing channel).`

I don't know whether any other macOS M1 users have faced the same issues and could help. I would appreciate any help, because I like this tool and am interested in using it.
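For what it's worth, since the failing packages all look like builds that may simply not be published for this platform, a possible workaround (sketched here, not verified on M1) is to skip conda entirely and use the pip-based install the README recommends, inside a plain virtual environment:

```shell
# Create an isolated virtual environment and install DeepSlice from PyPI,
# the install route the README recommends (conda is not required).
python3 -m venv deepslice-env
. deepslice-env/bin/activate
pip install DeepSlice
```

The environment name `deepslice-env` is arbitrary; whether the PyPI wheels resolve cleanly on Apple Silicon is an assumption I have not confirmed.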
