jackyko1991 / vnet-tensorflow
Implementation of vnet in tensorflow for medical image segmentation
When I run train.py, something went wrong: KeyError: 'verbosity'
2020-06-14 16:28:44.548595: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
Traceback (most recent call last):
File "D:/vnet-tensorflow-master/train.py", line 625, in <module>
tf.compat.v1.app.run()
File "E:\Ana\envs\py374\lib\site-packages\tensorflow_core\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "E:\Ana\envs\py374\lib\site-packages\absl\app.py", line 293, in run
flags_parser,
File "E:\Ana\envs\py374\lib\site-packages\absl\app.py", line 359, in _run_init
logging.use_absl_handler()
File "E:\Ana\envs\py374\lib\site-packages\absl\logging\__init__.py", line 1148, in use_absl_handler
FLAGS['verbosity']._update_logging_levels()  # pylint: disable=protected-access
File "E:\Ana\envs\py374\lib\site-packages\absl\flags\_flagvalues.py", line 463, in __getitem__
return self._flags()[name]
KeyError: 'verbosity'
How can I solve this problem? (My Python version is 3.7.4 with tensorflow-gpu==1.14.0, and the Anaconda environment is otherwise fine.)
Any help would be appreciated.
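This KeyError usually means absl-py's logging flags were never registered on the FLAGS instance in use, which most often points to a broken or duplicated absl-py install rather than a bug in train.py. A minimal sanity check, under that assumption:

```python
# Assumption: the KeyError comes from a broken/duplicated absl-py install.
# Importing absl.logging registers the 'verbosity' flag on the global FLAGS
# as an import side effect; if the assertion below fails, reinstalling
# absl-py (e.g. pip install --force-reinstall absl-py) is a common fix.
from absl import flags
import absl.logging  # noqa: F401  (import side effect registers logging flags)

assert "verbosity" in flags.FLAGS
```

If the assertion passes in a clean interpreter but the error still occurs when running train.py, the script is likely picking up a second absl install from a different site-packages directory.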
Hi, thank you for sharing your repo.
Can I use the same data for testing and evaluation?
How do I train for multiple classes? For 2 classes we assign 0 for background and 1 for foreground, but for 3 classes, i.e. background, organ and tumor, what changes should I make?
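In most multi-class segmentation setups the change is twofold: labels get one integer per class, and the network's final layer outputs one channel per class with a softmax instead of a sigmoid. A small sketch of the label encoding (the variable names here are illustrative, not the repo's exact config keys):

```python
import numpy as np

# Hypothetical 3-class encoding: 0 = background, 1 = organ, 2 = tumor.
# The final conv layer must then output num_classes channels (softmax).
num_classes = 3
label = np.array([0, 1, 2, 1, 0])      # labels for 5 example voxels
one_hot = np.eye(num_classes)[label]   # shape (5, 3), one channel per class
```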
I am trying to use your code with my own CT volume raw data.
You used the SimpleITK module to read the .nii files, but it doesn't support raw files.
So I'm considering two options.
One is to change your code to read raw files; the second is to convert my raw files into the .nii format you use.
I think the second option is much better, but I didn't find a way to convert a raw file into a .nii file. :(
Do you know of any way?
Hi,
I am looking into your tensorflow implementation of V-net and it's really interesting.
I have a question about the multichannel inputs, because you say that your code handles them.
In the get_dataset function in the Niftidataset.py file, for each case you only append one image_path, i.e. you are only considering one image per case, as we can see in the code lines below.
for case in os.listdir(self.data_dir):
image_paths.append(os.path.join(self.data_dir,case,self.image_filename))
label_paths.append(os.path.join(self.data_dir,case,self.label_filename))
How do I adapt the code if I have 4 input images per case, as in the tree below?
.
├── ...
├── data
│   ├── testing
│   │   ├── case1
│   │   │   ├── img_1.nii.gz
│   │   │   ├── img_2.nii.gz
│   │   │   ├── img_3.nii.gz
│   │   │   ├── img_4.nii.gz
│   │   │   └── label.nii.gz
│   │   ├── case2
│   │   │   ├── img_1.nii.gz
│   │   │   ├── img_2.nii.gz
│   │   │   ├── img_3.nii.gz
│   │   │   ├── img_4.nii.gz
│   │   │   └── label.nii.gz
│   │   └── ...
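One way to adapt the path collection is to store a list of image paths per case instead of a single path, so the four channels can later be read and stacked along the channel axis. A sketch (function and variable names here are illustrative, not the repo's exact API):

```python
import os

def collect_paths(data_dir, image_filenames, label_filename="label.nii.gz"):
    """Collect a LIST of image paths per case plus one label path per case."""
    image_paths, label_paths = [], []
    for case in sorted(os.listdir(data_dir)):
        image_paths.append(
            [os.path.join(data_dir, case, f) for f in image_filenames]
        )
        label_paths.append(os.path.join(data_dir, case, label_filename))
    return image_paths, label_paths
```

For the tree above, `image_filenames` would be `["img_1.nii.gz", "img_2.nii.gz", "img_3.nii.gz", "img_4.nii.gz"]`, and the reader would then stack the four volumes into one multi-channel input.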
Thanks
Hi, thanks for the great repo. The 'Visual Representation' shows that the kernel size used for the Down Conv and Up Conv layers is (2x2x1). However, it's (2x2x2) in the code. The latter makes sense though. Could you please let me know if it's a typo in the figure?
Hi, there! Amazing work you have here. But I have a question.
I tried to run your main.py like this:
$ python3 main.py --config_json config.json --gpu 1
Unfortunately, the terminal showed several issues:
...
tensorflow.python.framework.errors_impl.OutOfRangeError: End of sequence
[[{{node IteratorGetNext}}]]
...
OutOfRangeError (see above for traceback): End of sequence
[[node IteratorGetNext (defined at /home/jeff/vnet-tensorflow/model.py:327) ]]
...
So, I tried to print the path produced by def input_parser from NiftiDataset3D like this:
def input_parser(self, case):
    case = case.decode("utf-8")
    image_paths = []
    for channel in range(len(self.image_filenames)):
        image_paths.append(os.path.join(self.data_dir, case, self.image_filenames[channel]))
        print(os.path.join(self.data_dir, case, self.image_filenames[channel]))
and the result is also fine:
/home/jeff/vnet-tensorflow/data/training/case1/img.nii.gz
/home/jeff/vnet-tensorflow/data/training/case3/img.nii.gz
/home/jeff/vnet-tensorflow/data/training/case2/img.nii.gz
Do you have any insights for these issues?
Notes: Currently, I am using Tensorflow v.1.13.1.
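For what it's worth, an OutOfRangeError from IteratorGetNext in TF 1.x usually just signals that the dataset was exhausted, not that the file paths are wrong; training loops either catch it once per epoch or call `.repeat()` so the pipeline never runs dry. A tiny sketch (shown with the eager TF 2.x style API for brevity, whereas the repo targets TF 1.x):

```python
import tensorflow as tf

# .repeat(2) produces the sequence twice; .repeat() with no argument would
# loop forever, which is the usual way to avoid End-of-sequence errors
# when the number of training steps exceeds one pass over the data.
ds = tf.data.Dataset.range(3).repeat(2)
vals = [int(x) for x in ds]
```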
Thank you for sharing the code
I am working on some similar images like Lits
How many epochs and what maximum number of iterations were used to produce the results?
Also, which optimizer and what learning rate?
AssertionError: Length of DICE weight is 3, should be 2
What do you mean by this?
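A guess at what the assertion checks (the names below are hypothetical, not the repo's exact code): the length of the Dice class-weight list from config.json is compared against the number of classes the model was built with, so the two must match.

```python
# Hypothetical reconstruction of the check: one Dice weight per class.
num_classes = 2
dice_weights = [0.3, 0.7]   # length must equal num_classes
assert len(dice_weights) == num_classes, (
    "Length of DICE weight is %d, should be %d" % (len(dice_weights), num_classes)
)
```

So if the model is configured for 2 classes but the config lists 3 weights, trimming the weight list (or raising the class count) should clear the error.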
2020-04-21 15:10:25.662351: Start training...
2020-04-21 15:10:25.662382: Setting up Saver...
Traceback (most recent call last):
File "main.py", line 83, in <module>
main(args)
File "main.py", line 75, in main
model.train()
File "....../model.py", line 579, in train
shutil.rmtree(self.log_dir)
NameError: name 'shutil' is not defined
model.py is missing import shutil
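The fix is just to add `import shutil` at the top of model.py, since train() calls shutil.rmtree(self.log_dir). A slightly more defensive sketch of that log-directory reset:

```python
import os
import shutil

def reset_log_dir(log_dir):
    """Remove any stale event files, then recreate an empty log directory."""
    if os.path.isdir(log_dir):
        shutil.rmtree(log_dir)
    os.makedirs(log_dir)
```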
Hi,
I am using this code for brain tumor segmention(BRATS dataset),and the result is not satisfying.In another issue you mentioned the data you uesd,i'm wondering if you can share the data so i can have a try.
Thanks
Hi there,
Thank you very much for posting your implementation. I was going through the code and noticed that you batch normalize x, but did you mean to batch normalize layer_input? Otherwise the residual addition is just adding x to x...
This also happens in several other parts of the code by the way.
Line 48 in 4c60947
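A toy illustration of the point being raised: a residual unit should compute layer_input + F(layer_input), not F(layer_input) + F(layer_input).

```python
import numpy as np

def residual_block(layer_input, f):
    x = f(layer_input)       # stands in for batch norm + conv in the real model
    return layer_input + x   # add the UNTRANSFORMED input back

x = np.ones(3)
out = residual_block(x, lambda t: 2.0 * t)   # 1 + 2 = 3 per element
```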
Thanks,
Brian
Hi,
thank you for sharing your code. I have a question about something you are doing here:
vnet-tensorflow/NiftiDataset3D.py
Line 131 in 4c60947
What is the reason to call np.transpose() on the axes? I am asking because I have seen it in different places, but according to the TensorFlow conv3d documentation the spatial dimensions should already be in (z, y, x) order, which is what sitk.GetArrayFromImage() returns.
Am I missing something?
Hello,
I have a question about the train data that you used. In the quoted paper about the v-net, the authors used MRI-data from the "promise challenge" (https://promise12.grand-challenge.org/).
Are you testing your implementation with the same data?
Best regards,
Yves
Hi @jackyko1991,
Thank you for sharing such a great repo. I'm using it to segment medical grayscale images on a private dataset, but my segmentation target is very small, so I followed your notes in NiftiDataset3D.py and replaced RandomCrop() with ConfidenceCrop2 in the image preprocessing section. But I ran into a problem:
RuntimeError: Exception thrown in SimpleITK RegionOfInterestImageFilter_Execute: D:\a\1\sitk-build\ITK\Modules\Core\Common\src\itkDataObject.cxx:393:
Requested region is (at least partially) outside the largest possible region.
I have also looked up some information, but I still don't know how to solve it. Could you give me some help?
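For reference, that ITK message means the requested crop's start + size exceeds the image extent along at least one axis. One way to guard against it is to clamp the crop start before calling the region-of-interest filter (illustrative helper, not the repo's exact code):

```python
def clamp_crop_start(image_size, crop_size, start):
    """Clamp each start index so that 0 <= start <= dim - crop per axis."""
    return [
        min(max(s, 0), dim - c)
        for s, dim, c in zip(start, image_size, crop_size)
    ]
```

This assumes the crop size itself fits inside the image; if any crop dimension is larger than the image, the volume needs padding first.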
BTW, my segmentation target is small and the results were poor: the Dice score was low. I don't know how to improve it; could you give me some hints?
Looking forward to your reply.
Thanks in advance.
Best.
Hello @jackyko1991
Thank you for your code. I trained and evaluated on the dataset (data_sphere) you provided and it works perfectly, thanks for that :)
I have trained vnet-tensorflow on the LITS challenge data, https://drive.google.com/drive/u/0/folders/0B0vscETPGI1-Q1h1WFdEM2FHSUE. Training works well and I have some checkpoints, but when I run evaluation.py on a file such as volume1.nii, which is about 40 MB, it takes >32 GB of RAM (host memory, not GPU memory) and then the machine freezes :(. My PC has 32 GB of RAM and a 1080 Ti GPU; is that enough to evaluate the LITS data?
I have two questions
I am using python main.py -p phase --config config.json
to train and evaluate the data
From what I know, there should be no labels alongside the testing data, otherwise it's just training data. So, as explained in the folder hierarchy, why are labels provided with their corresponding images in the training folder? (I have not read the full paper yet.)
Hello,
I succeeded in compiling TensorFlow 1.8.0 with ITK 4.13 and protobuf 3.5.0, together with the project in the cxx folder of this repo. But when I run the main.cxx code with Visual Studio 2015, I get an error from tf_inference.cpp line 143, which writes this to my console:
Non-OK-Status: m_sess->Run({}, vNames, {}, &out) status: Invalid argument: must specify at least one target to fetch or execute.
I am not familiar with the TensorFlow C++ API or with the Session::Run code provided by TensorFlow, so does someone have an idea how to solve it?
Thanks !