albert597 / trailmap
License: MIT License
Dear @AlbertPun,
I've just been trying to get TRAILMAP to work as one of our microscope facility users has very similar image data.
However, it appears that the file data/model-weights/trailmap_model.hdf5
is corrupted in the git repository. I tried cloning the repo using git clone
as well as downloading the repo as a zip file and extracting it.
TRAILMAP throws an exception. I can reproduce the exception with this minimal Python code:
Python 3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)]
Type 'copyright', 'credits' or 'license' for more information
IPython 7.8.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import h5py
In [2]: f = h5py.File("trailmap_model.hdf5", "r")
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-2-c2c40868ba1f> in <module>
----> 1 f = h5py.File("trailmap_model.hdf5", "r")
c:\users\volker\anaconda3\envs\napari_new\lib\site-packages\h5py\_hl\files.py in __init__(self, name, mode, driver, libver, userblock_size, swmr, rdcc_nslots, rdcc_nbytes, rdcc_w0, track_order, **kwds)
406 fid = make_fid(name, mode, userblock_size,
407 fapl, fcpl=make_fcpl(track_order=track_order),
--> 408 swmr=swmr)
409
410 if isinstance(libver, tuple):
c:\users\volker\anaconda3\envs\napari_new\lib\site-packages\h5py\_hl\files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
171 if swmr and swmr_support:
172 flags |= h5f.ACC_SWMR_READ
--> 173 fid = h5f.open(name, flags, fapl=fapl)
174 elif mode == 'r+':
175 fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)
h5py\_objects.pyx in h5py._objects.with_phil.wrapper()
h5py\_objects.pyx in h5py._objects.with_phil.wrapper()
h5py\h5f.pyx in h5py.h5f.open()
OSError: Unable to open file (file signature not found)
Trying to inspect the HDF5 file with HDFView, or diagnosing it with h5debug, also shows that the file is not valid HDF5.
I presume this may have to do with git's handling of large binary files?
Would you be able to share the pre-trained model via some other method (Google Drive, Dropbox)?
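For anyone hitting the same symptom, a quick sketch of how to tell a real HDF5 file from a Git LFS pointer that was checked out without its blob (the path is the one from the repo; the helper itself only reads the first bytes of the file):

```python
# Sketch: classify a model file by its leading bytes. A valid HDF5 file
# starts with the 8-byte HDF5 signature; a Git LFS pointer file is a tiny
# text file starting with "version https://git-lfs...".
HDF5_MAGIC = b"\x89HDF\r\n\x1a\n"

def diagnose(path):
    with open(path, "rb") as f:
        head = f.read(64)
    if head.startswith(HDF5_MAGIC):
        return "hdf5"
    if head.startswith(b"version https://git-lfs"):
        return "lfs-pointer"  # likely fix: git lfs install && git lfs pull
    return "unknown"
```

For example, `diagnose("data/model-weights/trailmap_model.hdf5")` returning "lfs-pointer" would point at LFS checkout rather than true corruption.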
Dear @AlbertPun and @dfriedma ,
These are more questions than issues; I hope this is the right place for them.
- I was wondering how the network behaves in the presence of neuronal cell bodies in the images. Is it able to remove them, or should we clean them up before inference?
- The threshold variable defaults to 0.01. What does it represent? As far as I understand, if the chunk to be processed has a maximum value higher than threshold, it is added to the queue. Fine. But what does that value correspond to? Anything that is not pitch black, right? And is it interpreted as float32 or int16?
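To make my reading of the gate explicit, here is a sketch of what I understand the check to be (the function name is mine; whether the comparison happens on raw int16 voxel values or on data rescaled to float32 is exactly what I'm asking):

```python
import numpy as np

def should_enqueue(chunk, threshold=0.01):
    # Hypothetical gate matching my reading of the code: a chunk is only
    # queued for inference if some voxel exceeds `threshold`. If the data
    # were rescaled to float32 in [0, 1], 0.01 would mean "anything not
    # essentially black"; on raw int16 it would mean something different.
    return float(np.max(chunk)) > threshold
```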
Thanks!
A
Hi all, I am running into the error below when I try to run python3 segment_brain_batch.py. Here is the error, followed by what I have in my conda environment and the drivers I have installed.
python3 segment_brain_batch.py ~/Desktop/Training/BG_TS_01_ps6/ ~/Desktop/Training/BG_TS_01_auto/
2021-06-30 08:35:24.484875: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2021-06-30 08:35:24.558357: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:17:00.0 name: Quadro RTX 5000 computeCapability: 7.5
coreClock: 1.815GHz coreCount: 48 deviceMemorySize: 15.74GiB deviceMemoryBandwidth: 417.29GiB/s
2021-06-30 08:35:24.559367: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 1 with properties:
pciBusID: 0000:73:00.0 name: Quadro RTX 5000 computeCapability: 7.5
coreClock: 1.815GHz coreCount: 48 deviceMemorySize: 15.75GiB deviceMemoryBandwidth: 417.29GiB/s
2021-06-30 08:35:24.559592: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2021-06-30 08:35:24.561353: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2021-06-30 08:35:24.563171: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2021-06-30 08:35:24.563435: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2021-06-30 08:35:24.565187: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2021-06-30 08:35:24.566166: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2021-06-30 08:35:24.569965: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2021-06-30 08:35:24.573594: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0, 1
2021-06-30 08:35:24.573925: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
2021-06-30 08:35:24.582588: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2400000000 Hz
2021-06-30 08:35:24.584244: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5556a8390420 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-06-30 08:35:24.584270: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2021-06-30 08:35:24.825760: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:17:00.0 name: Quadro RTX 5000 computeCapability: 7.5
coreClock: 1.815GHz coreCount: 48 deviceMemorySize: 15.74GiB deviceMemoryBandwidth: 417.29GiB/s
2021-06-30 08:35:24.826740: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 1 with properties:
pciBusID: 0000:73:00.0 name: Quadro RTX 5000 computeCapability: 7.5
coreClock: 1.815GHz coreCount: 48 deviceMemorySize: 15.75GiB deviceMemoryBandwidth: 417.29GiB/s
2021-06-30 08:35:24.826815: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2021-06-30 08:35:24.826828: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2021-06-30 08:35:24.826840: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2021-06-30 08:35:24.826851: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2021-06-30 08:35:24.826862: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2021-06-30 08:35:24.826891: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2021-06-30 08:35:24.826903: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2021-06-30 08:35:24.830335: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0, 1
2021-06-30 08:35:24.830381: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2021-06-30 08:35:24.832083: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-06-30 08:35:24.832098: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0 1
2021-06-30 08:35:24.832103: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N Y
2021-06-30 08:35:24.832123: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 1: Y N
2021-06-30 08:35:24.835324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14814 MB memory) -> physical GPU (device: 0, name: Quadro RTX 5000, pci bus id: 0000:17:00.0, compute capability: 7.5)
2021-06-30 08:35:24.837571: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 15196 MB memory) -> physical GPU (device: 1, name: Quadro RTX 5000, pci bus id: 0000:73:00.0, compute capability: 7.5)
2021-06-30 08:35:24.840103: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5556ac1a1930 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-06-30 08:35:24.840122: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Quadro RTX 5000, Compute Capability 7.5
2021-06-30 08:35:24.840127: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (1): Quadro RTX 5000, Compute Capability 7.5
/home/quantum/Desktop/Training/seg-BG_TS_01_ps6 already exists. Will be overwritten
Name: BG_TS_01_ps6
[ ] 0% ETA: Pending 2021-06-30 08:35:31.268174: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2021-06-30 08:35:32.330143: W tensorflow/stream_executor/gpu/redzone_allocator.cc:312] Not found: ./bin/ptxas not found
Relying on driver to perform ptx compilation. This message will be only logged once.
2021-06-30 08:35:32.406519: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
[==================================== ] 90% ETA: 0.3 mins /home/quantum/software/TRAILMAP/inference/segment_brain.py:60: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
vol = np.array(vol)
TypeError: only size-1 arrays can be converted to Python scalars
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "segment_brain_batch.py", line 43, in <module>
segment_brain(input_folder, output_folder, model)
File "/home/quantum/software/TRAILMAP/inference/segment_brain.py", line 146, in segment_brain
section = read_folder_section(input_folder, end_aligned, end_aligned + input_dim).astype('float32')
ValueError: setting an array element with a sequence.
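The TypeError/ValueError pair can be reproduced outside TRAILMAP with a hypothetical ragged stack of images, e.g. if a stray entry or mismatched image ends up in the folder listing:

```python
import numpy as np

# Two "slices" with different shapes, as would happen if a non-image or
# wrong-sized entry sneaks into the folder listing.
ragged = [np.zeros((4, 4)), np.zeros((3, 3))]

# NumPy can only hold this as an object array (hence the
# VisibleDeprecationWarning mentioned in the log)...
vol = np.array(ragged, dtype=object)

# ...and converting that to float32 then fails just like segment_brain:
try:
    vol.astype("float32")
except (TypeError, ValueError) as err:
    print("reproduced:", type(err).__name__)
```

This suggests the root cause is not the GPU setup but some entry in the input folder producing a slice of the wrong shape (or no image at all).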
This is my current CUDA/driver/GPU setup:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro RTX 5000 Off | 00000000:17:00.0 On | Off |
| 33% 34C P8 19W / 230W | 405MiB / 16116MiB | 1% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Quadro RTX 5000 Off | 00000000:73:00.0 Off | Off |
| 33% 31C P8 11W / 230W | 11MiB / 16125MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1345 G /usr/lib/xorg/Xorg 39MiB |
| 0 N/A N/A 2217 G /usr/lib/xorg/Xorg 130MiB |
| 0 N/A N/A 2351 G /usr/bin/gnome-shell 172MiB |
| 0 N/A N/A 3106 G ...mviewer/tv_bin/TeamViewer 14MiB |
| 0 N/A N/A 3454 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 3664 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 8381 G /usr/lib/firefox/firefox 25MiB |
| 0 N/A N/A 13134 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 757623 G gnome-control-center 3MiB |
| 1 N/A N/A 1345 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 2217 G /usr/lib/xorg/Xorg 4MiB |
And here is everything in my conda environment:
# packages in environment at /home/quantum/anaconda3/envs/trailmap_env:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 4.5 1_gnu
_tflow_select 2.1.0 gpu
absl-py 0.13.0 py37h06a4308_0
aiohttp 3.7.4 py37h27cfd23_1
astor 0.8.1 py37h06a4308_0
async-timeout 3.0.1 py37h06a4308_0
attrs 21.2.0 pyhd3eb1b0_0
blas 1.0 mkl
blinker 1.4 py37h06a4308_0
brotlipy 0.7.0 py37h27cfd23_1003
bzip2 1.0.8 h7b6447c_0
c-ares 1.17.1 h27cfd23_0
ca-certificates 2021.5.25 h06a4308_1
cachetools 4.2.2 pyhd3eb1b0_0
cairo 1.16.0 hf32fb01_1
certifi 2021.5.30 py37h06a4308_0
cffi 1.14.5 py37h261ae71_0
chardet 3.0.4 py37h06a4308_1003
click 8.0.1 pyhd3eb1b0_0
coverage 5.5 py37h27cfd23_2
cryptography 3.4.7 py37hd23ed53_0
cudatoolkit 10.1.243 h6bb024c_0
cudnn 7.6.5 cuda10.1_0
cupti 10.1.168 0
cython 0.29.23 py37h2531618_0
ffmpeg 4.0 hcdf2ecd_0
fontconfig 2.13.1 h6c09931_0
freeglut 3.0.0 hf484d3e_5
freetype 2.10.4 h5ab3b9f_0
gast 0.2.2 py37_0
glib 2.68.2 h36276a3_0
google-auth 1.32.0 pyhd3eb1b0_0
google-auth-oauthlib 0.4.4 pyhd3eb1b0_0
google-pasta 0.2.0 py_0
graphite2 1.3.14 h23475e2_0
grpcio 1.36.1 py37h2157cd5_1
h5py 2.8.0 py37h989c5e5_3
harfbuzz 1.8.8 hffaf4a1_0
hdf5 1.10.2 hba1933b_1
icu 58.2 he6710b0_3
idna 2.10 pyhd3eb1b0_0
importlib-metadata 3.10.0 py37h06a4308_0
intel-openmp 2021.2.0 h06a4308_610
jasper 2.0.14 h07fcdf6_1
jpeg 9b h024ee3a_2
keras-applications 1.0.8 py_1
keras-preprocessing 1.1.2 pyhd3eb1b0_0
ld_impl_linux-64 2.35.1 h7274673_9
libffi 3.3 he6710b0_2
libgcc-ng 9.3.0 h5101ec6_17
libgfortran-ng 7.5.0 ha8ba4b0_17
libgfortran4 7.5.0 ha8ba4b0_17
libglu 9.0.0 hf484d3e_1
libgomp 9.3.0 h5101ec6_17
libopencv 3.4.2 hb342d67_1
libopus 1.3.1 h7b6447c_0
libpng 1.6.37 hbc83047_0
libprotobuf 3.14.0 h8c45485_0
libstdcxx-ng 9.3.0 hd4cf53a_17
libtiff 4.2.0 h85742a9_0
libuuid 1.0.3 h1bed415_2
libvpx 1.7.0 h439df22_0
libwebp-base 1.2.0 h27cfd23_0
libxcb 1.14 h7b6447c_0
libxml2 2.9.12 h03d6c58_0
lz4-c 1.9.3 h2531618_0
markdown 3.3.4 py37h06a4308_0
mkl 2021.2.0 h06a4308_296
mkl-service 2.3.0 py37h27cfd23_1
mkl_fft 1.3.0 py37h42c9631_2
mkl_random 1.2.1 py37ha9443f7_2
multidict 5.1.0 py37h27cfd23_2
ncurses 6.2 he6710b0_1
numpy 1.20.2 py37h2d18471_0
numpy-base 1.20.2 py37hfae3a4d_0
oauthlib 3.1.0 py_0
olefile 0.46 py37_0
opencv 3.4.2 py37h6fd60c2_1
openssl 1.1.1k h27cfd23_0
opt_einsum 3.3.0 pyhd3eb1b0_1
pcre 8.45 h295c915_0
pillow 7.0.0 py37hb39fc2d_0
pip 21.1.2 py37h06a4308_0
pixman 0.40.0 h7b6447c_0
protobuf 3.14.0 py37h2531618_1
py-opencv 3.4.2 py37hb342d67_1
pyasn1 0.4.8 py_0
pyasn1-modules 0.2.8 py_0
pycparser 2.20 py_2
pyjwt 1.7.1 py37_0
pyopenssl 20.0.1 pyhd3eb1b0_1
pysocks 1.7.1 py37_1
python 3.7.10 h12debd9_4
readline 8.1 h27cfd23_0
requests 2.25.1 pyhd3eb1b0_0
requests-oauthlib 1.3.0 py_0
rsa 4.7.2 pyhd3eb1b0_1
scipy 1.6.2 py37had2a1c9_1
setuptools 52.0.0 py37h06a4308_0
six 1.16.0 pyhd3eb1b0_0
sqlite 3.36.0 hc218d9a_0
tensorboard 2.4.0 pyhc547734_0
tensorboard-plugin-wit 1.6.0 py_0
tensorflow 2.1.0 gpu_py37h7a4bb67_0
tensorflow-base 2.1.0 gpu_py37h6c5654b_0
tensorflow-estimator 2.5.0 pyh7b7c402_0
tensorflow-gpu 2.1.0 h0d30ee6_0
termcolor 1.1.0 py37h06a4308_1
tk 8.6.10 hbc83047_0
typing-extensions 3.7.4.3 hd3eb1b0_0
typing_extensions 3.7.4.3 pyh06a4308_0
urllib3 1.26.4 pyhd3eb1b0_0
werkzeug 1.0.1 pyhd3eb1b0_0
wheel 0.36.2 pyhd3eb1b0_0
wrapt 1.12.1 py37h7b6447c_1
xz 5.2.5 h7b6447c_0
yarl 1.6.3 py37h27cfd23_0
zipp 3.4.1 pyhd3eb1b0_0
zlib 1.2.11 h7b6447c_3
zstd 1.4.9 haebb681_0
Thank you so much for your help and for the awesome software!
Dear @AlbertPun,
I find that the major hurdle to start using the program is labeling our own samples. If you could provide some script (however undocumented or basic it might be) to perform that task, it would be a great help.
A few questions:
- How many training samples does the network require when training from scratch?
- How many training samples does the network require for transfer learning?
- Do you define a training sample as each 64x64x64 px cube with 1-2 labeled slices?
Thanks a lot,
Best,
Augusto
Hi @AlbertPun and @dfriedma ,
I ALMOST successfully ran inference on my own half-brain stack with your network (I had no problems with the test dataset provided).
However, this error came up almost at the end:
2020-05-13 16:52:32.160325: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce RTX 2080 Ti, Compute Capability 7.5
Name: 200507_1034_561_080X_Dynamic_12-05-45
[ ] 0% ETA: Pending 2020-05-13 16:52:34.550799: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-05-13 16:52:35.416072: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
[===================================== ] 94% ETA: 1.4 mins TypeError: only size-1 arrays can be converted to Python scalars
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "segment_brain_batch.py", line 43, in <module>
segment_brain(input_folder, output_folder, model)
File "/home/augustoer/TRAILMAP/inference/segment_brain.py", line 121, in segment_brain
section = read_folder_section(input_folder, section_index, section_index + input_dim).astype('float32')
ValueError: setting an array element with a sequence.
Any ideas on how to fix it?
Thanks
A
Hi, everyone,
I tried to install Trailmap, but unfortunately got the following error:
Downloading data/model-weights/trailmap_model.hdf5 (229 MB)
Error downloading object: data/model-weights/trailmap_model.hdf5 (f866d8c):
Smudge error: Error downloading data/model-weights/trailmap_model.hdf5 (f866d8c068a52fc6c07fbbdebb2dc63d2c678a45936111fcf62ca4351aa3537d):
batch response: This repository is over its data quota.
Account responsible for LFS bandwidth should purchase more data packs to restore access.
Is there a way to download the model weights?
Thank you!
After sorting out the issues with TensorFlow and our 2080 Ti card, I ran into the following issue when trying to segment the testing volume.
2020-02-13 15:10:59.996792: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10080 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:65:00.0, compute capability: 7.5)
data/testing/example-chunk/seg- already exists. Will be overwritten
Name:
[================================= ] 83% ETA: 0.0 mins Traceback (most recent call last):
File "segment_brain_batch.py", line 40, in <module>
segment_brain(input_folder, output_folder, model)
File "/home/vhil0002/Github/tmp/TRAILMAP/inference/segment_brain.py", line 146, in segment_brain
section = read_folder_section(input_folder, end_aligned, end_aligned + input_dim).astype('float32')
ValueError: setting an array element with a sequence.
I was able to trace this down to the following lines of code.
It turns out that TRAILMAP creates a seg- subfolder in the folder with the images. This folder gets added to the list of tiff files returned by get_dir(...).
My quick and dirty workaround was to modify the line as follows:
tiffs = [os.path.join(path, f) for f in os.listdir(path) if f[0] != '.' and f.endswith('.tif')]
This is not ideal, as it excludes .tiff, .TIF, .png, etc., but it was good enough to get it working.
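A slightly more defensive version of that workaround (a hypothetical helper, not the repo's get_dir) that also skips subfolders like the seg- output directory and matches extensions case-insensitively:

```python
import os

IMAGE_EXTS = (".tif", ".tiff", ".png")  # extend as needed

def list_image_files(path):
    # Keep only regular files (which skips the seg- output subfolder),
    # ignore hidden files, and match extensions case-insensitively.
    return sorted(
        os.path.join(path, name)
        for name in os.listdir(path)
        if not name.startswith(".")
        and os.path.isfile(os.path.join(path, name))
        and name.lower().endswith(IMAGE_EXTS)
    )
```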
Hi,
I am wondering if you could provide an exported .yml file of the conda environment you use, with build/channel information? Running this with TensorFlow 2.1 installed from conda, I get an error on importing tensorflow (it can't find some DLL file). I can fix this error by installing TensorFlow 2.1 from pip, but that build does not engage the GPU, as it won't play well with my conda-installed cudatoolkit/cudnn versions. I think cloning from a .yml with the full build information might be the best way to get this installed correctly at this point.
Thank you very much!
Edit: I was able to fix this by updating the NVIDIA driver & CUDA versions at the system level, and the GPU is now engaged! A .yml with the conda environment may still be helpful for others in the future, though.
Would it be possible to give a full tested instruction set for either a pip or conda install?
At the very first step with pip, I get this:
$ pip3 install tensorflow=1.9
ERROR: Invalid requirement: 'tensorflow=1.9'
so it looks like there is a small syntax error. And then
$ /usr/local/bin/pip3 install tensorflow==1.9
ERROR: Could not find a version that satisfies the requirement tensorflow==1.9 (from versions: 1.13.0rc1, 1.13.0rc2, 1.13.1, 1.13.2, 1.14.0rc0, 1.14.0rc1, 1.14.0, 1.15.0rc0, 1.15.0rc1, 1.15.0rc2, 1.15.0rc3, 1.15.0, 2.0.0a0, 2.0.0b0, 2.0.0b1, 2.0.0rc0, 2.0.0rc1, 2.0.0rc2, 2.0.0)
ERROR: No matching distribution found for tensorflow==1.9
which I think is because tensorflow 1.9 requires an old version of Python (<3.7.0). It would be good to indicate whether a specific Python version is required and/or whether the TensorFlow version number is a hard requirement.
For the Python novice, I wouldn't mind a pointer to instructions for setting up a new virtualenv/pyenv with the right Python version.
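A sketch of one way to pin the interpreter and TensorFlow together with conda (the environment name and exact versions are my guesses; adjust to whatever combination the README actually targets):

```shell
# Create an isolated environment with an older Python, then install a
# pinned TensorFlow. Note the syntax difference: pip pins with '==',
# conda pins with '='.
conda create -n trailmap python=3.6
conda activate trailmap
pip install "tensorflow-gpu==1.9.0"
```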
Hello!
I am trying to create the required environment using the package versions detailed in the README (tensorflow-gpu==2.1, opencv==3.4.2, pillow==7.0.0, and h5py==2.10 with Python 3.7), and I am getting the following message:
Would it be possible to get an updated list of requirements?
Thanks!
Hi,
I managed to successfully predict the test dataset using TensorFlow for CPU, although it is very slow.
When I try running with tensorflow-gpu=1.9, I run into the following error:
/home/vhil0002/anaconda3/envs/trailmap/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/vhil0002/anaconda3/envs/trailmap/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/vhil0002/anaconda3/envs/trailmap/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/vhil0002/anaconda3/envs/trailmap/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/vhil0002/anaconda3/envs/trailmap/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/vhil0002/anaconda3/envs/trailmap/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
2020-02-12 14:37:21.233352: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open /home/vhil0002/Github/TRAILMAP/data/model-weights/trailmap_model.hdf5: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
2020-02-12 14:37:21.377533: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
2020-02-12 14:37:21.636131: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1392] Found device 0 with properties:
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.635
pciBusID: 0000:65:00.0
totalMemory: 10.73GiB freeMemory: 9.98GiB
2020-02-12 14:37:21.636165: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1471] Adding visible gpu devices: 0
2020-02-12 14:37:22.148132: I tensorflow/core/common_runtime/gpu/gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-02-12 14:37:22.148167: I tensorflow/core/common_runtime/gpu/gpu_device.cc:958] 0
2020-02-12 14:37:22.148173: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: N
2020-02-12 14:37:22.148399: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9637 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:65:00.0, compute capability: 7.5)
Name:
[ ] 0% ETA: Pending 2020-02-12 14:37:25.992729: E tensorflow/stream_executor/cuda/cuda_blas.cc:647] failed to run cuBLAS routine cublasSgemm_v2: CUBLAS_STATUS_EXECUTION_FAILED
Traceback (most recent call last):
File "segment_brain_batch.py", line 40, in <module>
segment_brain(input_folder, output_folder, model)
File "/home/vhil0002/Github/TRAILMAP/inference/segment_brain.py", line 127, in segment_brain
section_seg = helper_segment_section(model, section_vol)
File "/home/vhil0002/Github/TRAILMAP/inference/segment_brain.py", line 237, in helper_segment_section
output = np.squeeze(model.predict(batch_input)[:, :, :, :, [0]])
File "/home/vhil0002/anaconda3/envs/trailmap/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1478, in predict
self, x, batch_size=batch_size, verbose=verbose, steps=steps)
File "/home/vhil0002/anaconda3/envs/trailmap/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 363, in predict_loop
batch_outs = f(ins_batch)
File "/home/vhil0002/anaconda3/envs/trailmap/lib/python3.6/site-packages/tensorflow/python/keras/backend.py", line 2897, in __call__
fetched = self._callable_fn(*array_vals)
File "/home/vhil0002/anaconda3/envs/trailmap/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1454, in __call__
self._session._session, self._handle, args, status, None)
File "/home/vhil0002/anaconda3/envs/trailmap/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 519, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InternalError: Blas SGEMM launch failed : m=699840, n=1, k=64
[[Node: conv3d_14/Conv3D = Conv3D[T=DT_FLOAT, data_format="NDHWC", dilations=[1, 1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/device:GPU:0"](batch_normalization_13/batchnorm/add_1, conv3d_
14/Conv3D/ReadVariableOp)]]
I googled some of the error messages, and a Stack Overflow post suggested this may have to do with a lack of video memory. However, the 2080 Ti should have the same amount of video memory as the 1080 Ti you mention in the paper. Some video memory is used by the desktop, but not much.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.104 Driver Version: 410.104 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce RTX 208... On | 00000000:65:00.0 On | N/A |
| 41% 41C P8 28W / 260W | 327MiB / 10986MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 2278 G /usr/lib/xorg/Xorg 26MiB |
| 0 2321 G /usr/bin/gnome-shell 17MiB |
| 0 6186 G /usr/lib/xorg/Xorg 181MiB |
| 0 6329 G /usr/bin/gnome-shell 98MiB |
+-----------------------------------------------------------------------------+
Do you have any suggestions ?
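One configuration sketch worth trying (TF 1.x session options, not a confirmed fix): enabling on-demand GPU memory allocation before loading the model, which is a commonly suggested workaround for "Blas SGEMM launch failed" on newer cards.

```python
# TF 1.x configuration sketch: let the process grow GPU memory on demand
# instead of pre-allocating nearly all of it at session creation.
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
tf.keras.backend.set_session(tf.Session(config=config))
```

This would need to run before the model is loaded in segment_brain_batch.py; it only changes the allocator behavior, so if the failure persists it likely points at a CUDA/cuBLAS version mismatch for the RTX card rather than memory pressure.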
Hi @AlbertPun,
This project looks awesome, and it's related to some stuff we're doing for whole brain image registration, cell detection, standard space analysis and visualisation (e.g. cellfinder). What we don't have however is a good way to segment axons in these datasets.
I have a couple of questions:
I couldn't find your email, but if you want to carry on this conversation via email, my address is [email protected].
Thanks!
Adam
Note, though, that in contrast to your instructions I do not have h5py version 2.1 (pip3 install h5py==2.1), as this version is no longer available on PyPI for installation with pip, nor is it available from conda-forge. I tried to build version 2.1 from source but haven't had luck so far. I also doubt that 2.1 would have produced incompatible HDF5 files.
Edited to add:
I have tested h5py version 2.6 and version 2.10 on Linux. HDFView was an older install on Windows.
Originally posted by @VolkerH in #3 (comment)