
Comments (8)

colIapse commented on July 24, 2024

Dear Alexamousley

It seems that we are preprocessing the same dataset and ran into the same issues during surface reconstruction... Have you solved the problem yet?

Kind Regards


alexamousley commented on July 24, 2024

Hello,

I'm glad to hear I am not the only one since it suggests it's not an issue with the pipeline setup on our end. Unfortunately, I have yet to find a solution. I am wondering if it could be an issue with the version of VTK but that is a bit of a shot in the dark. I will post if I find a solution, please let me know as well if you manage to get it working!

Best,
Lexi


colIapse commented on July 24, 2024

Dear Lexi
Good afternoon!

I'm glad to find someone in a similar situation.
As soon as I find an approach, I will contact you. For now, I am planning to try some other data to rule out a problem with the data itself and to see whether they fail at the same stage.
The BCP dataset I have is the 2019 release, and it is quite disorganized. (You must already know that...)
I once tried another pipeline to preprocess the dataset (because of the subjects' ages), DCAN-Labs/infant-abcd-bids-pipeline, which is available on Docker and GitHub. That pipeline failed as well (even though the data were in BIDS format).

Wishing us good luck in finishing the preprocessing successfully!

Kind Regards
Zhao Yu


colIapse commented on July 24, 2024

Dear Lexi
Good afternoon!

How's everything going?
I am not sure whether any progress has been made with this pipeline, but let me share my current train of thought.

First, I tried several other datasets (all from the BCP dataset, converted with dcm2nii) and received different error reports. Some of them are exactly the same as yours and some are not.
Here are example error logs (in .txt format):
MNBCP334326-session1.surface.err.txt
subject1-session1.surface.err.txt
As you can see, one of the error reports is almost the same as yours (apart from the Generic Warning), and the existing and missing files are exactly the same.
The files present also differ: in the subject1-session1 case, there is no cerebrum-lh.vtp or cerebrum-rh.vtp in workdir\subject1-session1\surfaces\subject1-session1\vtk\temp-recon\subject1-session1.
I am still wondering why the same step and parameters can produce different error reports. Maybe, as you say, it's due to the VTK version, but it's all a black box.

Then I turned to NeuroStars and searched for dHCP-structural-pipeline to ask for more information. In a topic named Problem with dHCP-structural-pipeline using only T2W image (https://neurostars.org/t/problem-with-dhcp-structural-pipeline-using-only-t2w-image/23279), someone asked about a surface error report and got an answer (although their error report is not the same as ours). The replier said the error might be due to the form of the data and that the '-recon-from-seg' parameter might help.

I am going to add this parameter when running the pipeline to see whether it also fixes our problem. Unfortunately, I use the Docker version of the pipeline, and in the Docker version the -recon-from-seg parameter is unavailable. I will try the bash version later and check whether the error disappears when the parameter is added (just a try; I don't hold much hope for it).
It also seems that the person who answered the question is an expert in the dHCP field, so I am going to message them about it later as well (but their last post was on Dec 21, '22, so perhaps there is not much hope there either).
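For reference, this is roughly the invocation I have in mind for the bash version. It is only a sketch: the script name, the positional arguments, and whether -recon-from-seg is actually accepted are assumptions based on the NeuroStars reply, so please check the pipeline's usage message before relying on it.

    # Rough sketch only: running the bash (non-Docker) pipeline with the
    # -recon-from-seg parameter suggested on NeuroStars. Script name, argument
    # order, and the flag itself are assumptions -- check ./dhcp-pipeline.sh -h
    # (or the repository README) for the real usage.
    ./dhcp-pipeline.sh <subjectID> <sessionID> <scan_age_in_weeks> \
        -T2 <T2_image.nii.gz> \
        -t 8 \
        -recon-from-seg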

I will report my progress the next time I contact you, which may be a while from now because I have some busy things to deal with.
How is everything going on your side? I hope you can get the pipeline running smoothly. Spring has come; I wish you all the best.

Kind Regards
Zhao


schuhschuh commented on July 24, 2024

Hi both! Thanks for reaching out. I'm not the most active user in the project the past few years, but maybe I can shed some light on what is happening here.

The folder '/surfaces/sub-116056-ses-3mo/vtk/' is empty apart from the 'temp-recon' subfolder. The 'temp-recon' folder includes the files cerebrum-lh.vtp, cerebrum-rh.vtp, and t2w-image.nii.gz. The surfaces command also appears to be looking for 'region-labels.nii.gz' and 'cerebrum-1.vtp', which are not present in the folder.

This is probably expected; it neither explains the errors you see nor indicates that there is anything wrong with the input data you are providing. Most temporary files are deleted upon termination by the Python process that implements the surface reconstruction, whether it finished successfully or with an error.

  • cerebrum-lh.vtp and cerebrum-rh.vtp are the initial surface meshes that were fit to the segmentations of the left and right cortical structures separately. The input to this process is region-labels.nii.gz, which is just a remapping and post-processing of the Draw-EM brain segmentation.
  • cerebrum-{i}.vtp are different stages of combining the two hemispheres into one single closed surface mesh. Subsequent steps of WM/cGM and cGM/CSF (pial) surface reconstructions are based on this combined mesh. This is to ensure that the final surface meshes for left and right hemispheres are disjoint, i.e., non-overlapping.
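To make the file names above concrete, this is roughly what temp-recon looks like when the intermediate files are kept (for example with the --debug option mentioned below). Apart from the files already named, the listing is purely illustrative.

    # Illustrative listing only -- the exact contents depend on the pipeline
    # version and on how far the reconstruction got before terminating.
    ls workdir/subject1-session1/surfaces/subject1-session1/vtk/temp-recon/
    # region-labels.nii.gz   t2w-image.nii.gz
    # cerebrum-lh.vtp        cerebrum-rh.vtp
    # cerebrum-1.vtp         cerebrum-2.vtp   ...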

The "Error: Could not find a closed intersection with finite cutting plane near segmentation boundary!" relates to a step as part of the process that tries to combine the initial WM/cGM surfaces from the two hemispheres into a single surface. As part of this, it also includes the brain stem and cerebellum segmentations and tries to find a suitable planar cut for dividing the interior of that combined surface mesh into a) left hemisphere, b) right hemisphere, and c) brainstem and cerebellum. When your input data differs from the general shape found in neonates, the (generally hard-coded) settings for this step may indeed fail.

The Python script that is run as part of the structural pipeline is recon-neonatal-cortex, as found in MIRTK (though possibly with some differences in the code, depending on the pipeline version). This command has a --debug option which you can use to keep intermediate files, which may be helpful in debugging the issue. It would also be good if you could isolate the execution of this command by storing its inputs (the outputs of the preceding pipeline steps) and then run only this command for one particular subject for which you observed the aforementioned error.
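A minimal sketch of such an isolated run is below. Only --debug is the option referred to above; the mirtk wrapper call and the --config/--sessions option names are assumptions, so check mirtk help recon-neonatal-cortex for the exact usage of your installed version.

    # Minimal sketch: re-run only the surface reconstruction for the one
    # failing subject, keeping intermediate files with --debug. The
    # --config/--sessions option names are assumptions -- check
    # `mirtk help recon-neonatal-cortex` for the exact usage.
    mirtk recon-neonatal-cortex \
        --config recon-neonatal-cortex.cfg \
        --sessions subject1-session1 \
        --debug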

The Python function join_cortical_surfaces should be the one raising this error, in particular via the merge-surfaces command of MIRTK that it invokes.

You could probably also bypass the error using the --nocut option, though I am not too clear on what the implications of using this option would be for downstream tasks. It might be that the surface inflation and spherical mapping would be hampered and that it would generally break the structural image processing pipeline.
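If you just want to test whether the planar cut is the only blocker, the same isolated call sketched above with --nocut added would be the thing to try, with the caveats about downstream steps just mentioned; again, only --nocut itself is taken from the options referred to above, the rest of the invocation is an assumption.

    # Same sketch as above, with --nocut added to skip the planar-cut step
    # that raises the error. Downstream inflation / spherical mapping may
    # then fail, as noted above.
    mirtk recon-neonatal-cortex \
        --config recon-neonatal-cortex.cfg \
        --sessions subject1-session1 \
        --debug --nocut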


alexamousley commented on July 24, 2024

My apologies for the late reply! We were setting up and running the other dHCP pipeline linked in the NeuroStars post. Sadly, despite using -recon-from-seg, we received the same error from recon-neonatal-cortex. Have you made any progress? Hope you are well!

Best,
Lexi


colIapse commented on July 24, 2024

Dear Lexi

Good afternoon! I'm really sorry for taking so long to reply. Last month I had an accident: I fractured my left arm and then had an operation. I spent the whole month in hospital, and I have only just recovered and started working again. Please accept my apology.

Meanwhile, the other sad news is that there has been no further progress on the problem. I still don't know how to solve it or get past the error (sorry about that again...). Before my accident, I was busy reading articles for a review and finishing my homework.

But some new ideas have come up. As schuhschuh mentioned on Apr 5, the reason it goes wrong may be that the input data differ from the general shape found in neonates. The BCP age range is 0 to 5 years, while the dHCP pipeline is designed for neonates aged from 0 to 40 weeks. In my trials, the BCP subjects within that age range did in fact run through the dHCP pipeline successfully. Maybe you can check whether the ages of your successfully preprocessed infants fall within this period.
I also haven't tried whether the --nocut option mentioned by schuhschuh solves the problem. I previously ran the dHCP pipeline in Docker, and to add that option the Python scripts would probably have to be revised; I haven't tried that yet. Maybe you can try the option and see what happens.
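If anyone wants to attempt that, a rough starting point might be to locate where the pipeline scripts call recon-neonatal-cortex inside the container and append --nocut to that call. The image name and the script path below are only guesses and should be checked against your own setup.

    # Rough sketch only: find where recon-neonatal-cortex is invoked inside
    # the Docker image, so --nocut could be appended to that call. The image
    # name and the path to the pipeline scripts are assumptions, not verified.
    docker run --rm --entrypoint bash biomedia/dhcp-structural-pipeline \
        -c "grep -rn 'recon-neonatal-cortex' /usr/src/structural-pipeline"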

What schuhschuh says inspires me a lot. I will keep browsing the pages mentioned to figure out where to go next. I hope it is helpful to you as well.
Sorry again for my late reply. Have you made any progress? I wish you all the best. (And speaking from my own experience, the most important thing is to take care of your health.)

Kind Regards
Zhao


colIapse commented on July 24, 2024

Dear schuhschuh

Sorry for the late reply! Last month my left arm was fractured and I spent the whole time in hospital. I have only just started working again and am able to reply. Please accept my apology!

And thanks a lot! What you say inspires me a lot! I didn't know about MIRTK before and had only run the pipeline in Docker, and your explanation of the .vtp files also helps me understand.
I will go on to browse the pages you mentioned.

Thanks for your help again!
Wish everything goes well.

Kind Regards
Zhao

