
SynthSeg

In this repository, we present SynthSeg, the first deep learning tool for segmentation of brain scans of any contrast and resolution. SynthSeg works out-of-the-box without any retraining, and is robust to:

  • any contrast
  • any resolution up to 10mm slice spacing
  • a wide array of populations: from young and healthy to ageing and diseased
  • scans with or without preprocessing: bias field correction, skull stripping, normalisation, etc.
  • white matter lesions.

    Generation examples


SynthSeg was first presented for the automated segmentation of brain scans of any contrast and resolution.

SynthSeg: Segmentation of brain MRI scans of any contrast and resolution without retraining
B. Billot, D.N. Greve, O. Puonti, A. Thielscher, K. Van Leemput, B. Fischl, A.V. Dalca, J.E. Iglesias
Medical Image Analysis (2023)
[ article | arxiv | bibtex ]

Then, we extended it to work on heterogeneous clinical scans, and to perform cortical parcellation and automated quality control.

Robust machine learning segmentation for large-scale analysis of heterogeneous clinical brain MRI datasets
B. Billot, M. Colin, Y. Cheng, S.E. Arnold, S. Das, J.E. Iglesias
PNAS (2023)
[ article | arxiv | bibtex ]


Here, we distribute our model to enable users to run SynthSeg on their own data. We emphasise that predictions are always given at 1mm isotropic resolution (regardless of the input resolution). The code can be run on the GPU (~15s per scan) or on the CPU (~1min).


New features and updates


01/03/2023: The papers for SynthSeg and SynthSeg 2.0 are out! 📖 📖
After a long review process for SynthSeg (Medical Image Analysis), and a much faster one for SynthSeg 2.0 (PNAS), both papers have been accepted at nearly the same time! See the references above, or in the citation section.


04/10/2022: SynthSeg is available in Matlab! ⭐
We are delighted that Matlab 2022b (and onwards) now includes SynthSeg in its Medical Imaging Toolbox. They provide a documented example of how to use it. But, to simplify things, we wrote our own Matlab wrapper, which you can call in a single line. Just download this zip file, uncompress it, open Matlab, and type help SynthSeg for instructions.


29/06/2022: SynthSeg 2.0 is out! ✌️
In addition to whole-brain segmentation, it now also performs cortical parcellation, automated QC, and intracranial volume (ICV) estimation (see figure below). Also, most of these features are compatible with SynthSeg 1.0 (see table).

new features

table versions


01/03/2022: Robust version 🔨
SynthSeg sometimes falters on scans with a low signal-to-noise ratio or very low tissue contrast. For this reason, we developed a new model for increased robustness, named "SynthSeg-robust". Use this mode when SynthSeg gives results like those in the figure below:

Robust


29/10/2021: SynthSeg is now available in the dev version of FreeSurfer! 🎉
See here for instructions on how to use it.


Try it in one command!

Once all the Python packages are installed (see below), you can simply test SynthSeg on your own data with:

python ./scripts/commands/SynthSeg_predict.py --i <input> --o <output> [--parc --robust --ct --vol <vol> --qc <qc> --post <post> --resample <resample>]

where:

  • <input> path to a scan to segment, or to a folder. This can also be the path to a text file, where each line is the path of an image to segment.
  • <output> path where the output segmentations will be saved. This must be the same type as <input> (i.e., the path to a file, a folder, or a text file where each line is the path to an output segmentation).
  • --parc (optional) to perform cortical parcellation in addition to whole-brain segmentation.
  • --robust (optional) to use the variant for increased robustness (e.g., when analysing clinical data with large slice spacing). This can be slower than the other model.
  • --ct (optional) use this flag on CT scans in Hounsfield units. It clips intensities to [0, 80].
  • <vol> (optional) path to a CSV file where the volumes (in mm3) of all segmented regions will be saved for all scans (e.g. /path/to/volumes.csv). If <input> is a text file, so must be <vol>, for which each line is the path to a different CSV file corresponding to one subject only.
  • <qc> (optional) path to a CSV file where QC scores will be saved. The same formatting requirements as <vol> apply.
  • <post> (optional) path where the posteriors, given as soft probability maps, will be saved (same formatting requirements as for <output>).
  • <resample> (optional) SynthSeg segmentations are always given at 1mm isotropic resolution. Hence, images are always resampled internally to this resolution (except if they are already at 1mm resolution). Use this flag to save the resampled images (same formatting requirements as for <output>).

Additional optional flags are also available:

  • --cpu: (optional) to enforce the code to run on the CPU, even if a GPU is available.
  • --threads: (optional) number of threads to be used by Tensorflow (default uses one core). Increase it to decrease the runtime when using the CPU version.
  • --crop: (optional) to crop the input images to a given shape before segmentation. The shape must be divisible by 32. Images are cropped around their centre, and their segmentations are given at the original size. It can be given as a single integer (e.g., --crop 160) or as several integers (e.g., --crop 160 128 192, ordered in RAS coordinates). By default the whole image is processed. Use this flag for faster analysis or to fit within your GPU memory.
  • --fast: (optional) to disable some operations for faster prediction (twice as fast, but slightly less accurate). This doesn't apply when the --robust flag is used.
  • --v1: (optional) to run the first version of SynthSeg (SynthSeg 1.0, updated 29/06/2022).

IMPORTANT: SynthSeg always gives results at 1mm isotropic resolution, regardless of the input. However, this can cause some viewers to incorrectly overlay segmentations on their corresponding images. In this case, you can use the --resample flag to obtain a resampled image that lives in the same space as the segmentation, so that they can be visualised together in any viewer.

The complete list of segmented structures is available in labels table.txt, along with their corresponding values. This table also details the order in which the posterior maps are sorted.
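
For example (with hypothetical paths), to segment every scan in a folder with the robust model, while also saving the volumes and the resampled images, one could run:

python ./scripts/commands/SynthSeg_predict.py --i /data/scans --o /data/segs --robust --vol /data/volumes.csv --resample /data/resampled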


Installation

  1. Clone this repository.

  2. Create a virtual environment (e.g., with pip or conda) and install all the required packages.
    These depend on your Python version; here we list the requirements for Python 3.6 (requirements_3.6) and Python 3.8 (requirements_3.8). The choice is yours, but in each case, please stick to the exact package versions.
    If you use pip, a first way to install the dependencies is to run setup.py (with an activated virtual environment): python setup.py install. Otherwise, we also give below the minimal commands to install the required packages using pip/conda for Python 3.6/3.8.

# Conda, Python 3.6:
conda create -n synthseg_36 python=3.6 tensorflow-gpu=2.0.0 keras=2.3.1 h5py==2.10.0 nibabel matplotlib -c anaconda -c conda-forge

# Conda, Python 3.8:
conda create -n synthseg_38 python=3.8 tensorflow-gpu=2.2.0 keras=2.3.1 nibabel matplotlib -c anaconda -c conda-forge

# Pip, Python 3.6:
pip install tensorflow-gpu==2.0.0 keras==2.3.1 nibabel==3.2.2 matplotlib==3.3.4

# Pip, Python 3.8:
pip install tensorflow-gpu==2.2.0 keras==2.3.1 protobuf==3.20.3 numpy==1.23.5 nibabel==5.0.1 matplotlib==3.6.2
  3. Go to the UCL dropbox link and download the missing models. Then simply copy them to models.

  4. If you wish to run on the GPU, you will also need to install CUDA (10.0 for Python 3.6, 10.1 for Python 3.8) and cuDNN (7.6.5 for both). Note that if you used conda, these were already installed automatically.
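
Optionally, you can then check whether TensorFlow detects your GPU with a quick one-liner (tf.test.is_gpu_available exists in the TF 2.0/2.2 versions pinned above):

python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"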

That's it! You're now ready to use SynthSeg! 🎉


How does it work?

In short, we train a network with synthetic images sampled on the fly from a generative model based on the forward model of Bayesian segmentation. Crucially, we adopt a domain randomisation strategy where we fully randomise the generation parameters, which are drawn at each minibatch from uninformative uniform priors. By exposing the network to extremely variable input data, we force it to learn domain-agnostic features. As a result, SynthSeg is able to readily segment real scans of any target domain, without retraining or fine-tuning.
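
To make this concrete, here is a deliberately simplified Python sketch of the GMM intensity-sampling step, using the uniform priors [0, 255] (means) and [0, 50] (standard deviations) that appear in the tutorials. The real generative model also applies random spatial deformation, bias field, blurring, and resampling, all omitted here:

import numpy as np

def sample_gmm_image(label_map, rng=np.random.default_rng()):
    # one Gaussian per label, redrawn from uninformative uniform priors at every minibatch
    image = np.zeros(label_map.shape)
    for lab in np.unique(label_map):
        mask = label_map == lab
        mean, std = rng.uniform(0, 255), rng.uniform(0, 50)
        image[mask] = rng.normal(mean, std, size=mask.sum())
    return image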

The following figure first illustrates the workflow of a training iteration, and then provides an overview of the different steps of the generative model:

Overview

Finally, we show additional examples of the synthesised images, along with an overlay of their target segmentations:

Training data

If you are interested in learning more about SynthSeg, you can read the associated publications (see below) and watch this presentation, which was given at MIDL 2020 for a related article on a preliminary version of SynthSeg (robust to MR contrast but not resolution).

Talk SynthSeg


Train your own model

This repository contains all the code and data necessary to train, validate, and test your own network. Importantly, the proposed method only requires a set of anatomical segmentations for training (no images), which we include in data. While the provided functions are thoroughly documented, we highly recommend starting with the following tutorials:

  • 1-generation_visualisation: This very simple script shows examples of the synthetic images used to train SynthSeg.

  • 2-generation_explained: This second script describes all the parameters used to control the generative model. We advise you to thoroughly follow this tutorial, as it is essential to understand how the synthetic data is formed before you start training your own models.

  • 3-training: This script reuses the parameters explained in the previous tutorial and focuses on the learning/architecture parameters. The script here is the very one we used to train SynthSeg!

  • 4-prediction: This script shows how to make predictions once the network has been trained.

  • 5-generation_advanced: Here we detail more advanced generation options, in the case of training a version of SynthSeg that is specific to a given contrast and/or resolution (although these types of variants were shown to be outperformed by the SynthSeg model trained in the 3rd tutorial).

  • 6-intensity_estimation: This script shows how to estimate the Gaussian priors of the GMM when training a contrast-specific version of SynthSeg.

  • 7-synthseg+: Finally, we show how the robust version of SynthSeg was trained.

These tutorials cover a lot of material and will enable you to train your own SynthSeg model. Moreover, even more detailed information is provided in the docstrings of all functions, so don't hesitate to have a look!
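
As a taste of the first tutorial, here is a minimal generation sketch (assuming you run from the repository root, with the label maps shipped in data; see 1-generation_visualisation for the full script):

from SynthSeg.brain_generator import BrainGenerator

# build the generator from the provided training label maps
brain_generator = BrainGenerator('./data/training_label_maps')

# each call synthesises a new random image along with its target label map
im, lab = brain_generator.generate_brain()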


Content

  • SynthSeg: this is the main folder containing the generative model and training function:

    • labels_to_image_model.py: contains the generative model for MRI scans.

    • brain_generator.py: contains the class BrainGenerator, which is a wrapper around labels_to_image_model. New images can simply be generated by instantiating an object of this class and calling the method generate_brain().

    • training.py: contains code to train the segmentation network (with explanations for all training parameters). This function also shows how to integrate the generative model in a training setting.

    • predict.py: prediction and testing.

    • validate.py: includes code for validation (which has to be done offline on real images).

  • models: this is where you will find the trained model for SynthSeg.

  • data: this folder contains some examples of brain label maps if you wish to train your own SynthSeg model.

  • scripts: contains tutorials as well as scripts to launch training and testing from a terminal.

  • ext: includes external packages, especially the lab2im package, and a modified version of neuron.


Citation/Contact

This code is released under the Apache 2.0 license.

  • If you use the cortical parcellation, automated QC, or robust version, please cite the following paper:

Robust machine learning segmentation for large-scale analysis of heterogeneous clinical brain MRI datasets
B. Billot, M. Colin, Y. Cheng, S.E. Arnold, S. Das, J.E. Iglesias
PNAS (2023)
[ article | arxiv | bibtex ]

  • Otherwise, please cite:

SynthSeg: Segmentation of brain MRI scans of any contrast and resolution without retraining
B. Billot, D.N. Greve, O. Puonti, A. Thielscher, K. Van Leemput, B. Fischl, A.V. Dalca, J.E. Iglesias
Medical Image Analysis (2023)
[ article | arxiv | bibtex ]

If you have any questions regarding the usage of this code, or any suggestions to improve it, please raise an issue or contact us at: [email protected]


synthseg's Issues

Running on Mac M1

Hi, has anyone tried to run this on an M1 Mac? I've followed various guides for installing TensorFlow on M1, but I'm running into several problems, including package compatibility... I can provide more details if needed.

synthseg fails on data with three slices

Hi,

First time ever trying mri_synthseg. The input is the mean volume of an EPI series. The volume has dimensions 64x64x3, so it is definitely 3D, yet SynthSeg complains that the data is 2D.

base โฏ mri_synthseg --i mean.nii.gz --o seg.nii.gz
SynthSeg 2.0
using 1 thread
DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
predicting 1/1
Error: input should have 3 dimensions, had 2

difference with freesurfer installation

Hello Benjamin

thanks for sharing this great work!

I installed the FreeSurfer dev version and could run the command mri_synthseg with the --robust option (nice improvement, by the way).

But I would prefer running it from this GitHub repo, and I cannot find this option in the repository... in SynthSeg/scripts/commands/, neither predict nor SynthSeg_predict has the --robust option.

Thanks

SynthSeg fails on a constructed template T1w scan

Hi,

I tried to test SynthSeg on a constructed template T1w scan, and it gave me an undesired segmentation, like the following:
image

The test image is constructed by a DL-based deformable template creation model, and its intensity is rescaled to the range [0,1]. I also tried restoring the intensity to the original HU space and reran the algorithm, but it gave a similar result.

Here is the command I used:
python SynthSeg_predict.py --i input_img --o output_img --robust

Any suggestion would be appreciated!

Is the WML segmentation model available?

Dear @BBillot,

Thanks for sharing the code.

I discovered the SynthSeg model applied to MS lesions, but the model included in the repo corresponds to the referenced main paper. Is the WML segmentation model available as well?

"ValueError: axes don't match array" in sample_segmentation_pairs_d.py

Thank you for sharing synthseg.

To train the denoiser in SynthSeg+, I'm working on sample_segmentation_pairs_d.py; however, it returns "ValueError: axes don't match array". Here is how the error is reproduced.

Environment: requirements_python3.6.txt

  1. Prepare the image/label dataset by running scripts/tutorials/1-generation_visualisation.py:
cd Synthseg/scripts/tutorials/
python 1-generation_visualisation.py 
  2. Reproduce the error by running the following code:
from SynthSeg.sample_segmentation_pairs_d import sample_segmentation_pairs

image_dir = './outputs_tutorial_1/image/image.nii.gz'
labels_dir = './outputs_tutorial_1/labels/labels.nii.gz'
results_dir = './outputs_tutorial_1/sample_seg_pair'
n_examples = 1
path_model = "../../models/synthseg_1.0.h5"
segmentation_labels = "../../data/labels_classes_priors/generation_labels.npy"

if __name__ == "__main__":
    sample_segmentation_pairs(
        image_dir=image_dir,
        labels_dir=labels_dir,
        results_dir=results_dir,
        n_examples=n_examples,
        path_model=path_model,
        segmentation_labels=segmentation_labels,
        n_neutral_labels=18, # n_neutral_labels=None,
        batchsize=1,
        flipping=True,
        scaling_bounds=.15,
        rotation_bounds=15,
        shearing_bounds=.012,
        translation_bounds=False,
        nonlin_std=3.,
        nonlin_scale=.04,
        min_res=1.,
        max_res_iso=4.,
        max_res_aniso=8.,
        noise_std_lr=3.,
        blur_range=1.03,
        bias_field_std=.5,
        bias_scale=.025,
        noise_std=10,
        gamma_std=.5)
  3. This eventually returns the error:
2023-01-28 11:42:52.894375: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2023-01-28 11:42:52.944030: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fee3e57bb80 executing computations on platform Host. Devices:
2023-01-28 11:42:52.944090: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Host, Default Version
Using TensorFlow backend.
Traceback (most recent call last):
  File "7-test-degradation_error.py", line 37, in <module>
    gamma_std=.5)
  File "/Users/chiba/Github/SynthSeg/SynthSeg/sample_segmentation_pairs_d_error.py", line 205, in sample_segmentation_pairs
    generation_model.load_weights(path_model, by_name=True)
  File "/Users/chiba/anaconda3/envs/synthseg_env3.6/lib/python3.6/site-packages/keras/engine/saving.py", line 492, in load_wrapper
    return load_function(*args, **kwargs)
  File "/Users/chiba/anaconda3/envs/synthseg_env3.6/lib/python3.6/site-packages/keras/engine/network.py", line 1227, in load_weights
    reshape=reshape)
  File "/Users/chiba/anaconda3/envs/synthseg_env3.6/lib/python3.6/site-packages/keras/engine/saving.py", line 1294, in load_weights_from_hdf5_group_by_name
    reshape=reshape)
  File "/Users/chiba/anaconda3/envs/synthseg_env3.6/lib/python3.6/site-packages/keras/engine/saving.py", line 980, in preprocess_weights_for_loading
    weights[0] = np.transpose(weights[0], (3, 2, 0, 1))
  File "<__array_function__ internals>", line 6, in transpose
  File "/Users/chiba/anaconda3/envs/synthseg_env3.6/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 651, in transpose
    return _wrapfunc(a, 'transpose', axes)
  File "/Users/chiba/anaconda3/envs/synthseg_env3.6/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 61, in _wrapfunc
    return bound(*args, **kwds)
ValueError: axes don't match array

Issues with the tutorials

Hello,

I am trying to run the tutorials module to become familiar with the code. However, I have an error.
When I type the command python 1-generation_visualisation.py, I get the error AttributeError: 'BrainGenerator' object has no attribute 'subjects_prob'.
Could you help me understand the mistake?

Thank you for your answer.

Dependency "pkg-resources==0.0.0" in requirements.txt breaks install in Docker/virtualenv

Hey,

thanks for sharing your work. I really liked your presentation at MIDL!

I just wanted to point out that your requirements.txt file includes the pkg-resources==0.0.0 dependency (https://github.com/BBillot/SynthSeg/blob/master/requirements.txt#L38), which unfortunately breaks the install in a fresh environment or Dockerfile with the following error:

Could not find a version that satisfies the requirement pkg-resources==0.0.0 (from -r requirements.txt (line 5)) (from versions: )
No matching distribution found for pkg-resources==0.0.0 (from -r requirements.txt (line 38))

The package is actually only included by pip freeze due to a bug in Ubuntu which is still not fixed (https://bugs.launchpad.net/ubuntu/+source/python-pip/+bug/1635463).

Removing it from the requirements file fixes the install problem.

Best,
Leonie

Crop only giving center of image

Hi, I tried to use patch-based inference with crop, but it only cropped the centre and didn't process the rest of the image. Is there a way to do patch-based inference and aggregate the patches to get the whole image?

Can't run on GPU with python 3.8

Hi!

I've been using SynthSeg for a few months now on Python 3.6 and wanted to test it on Python 3.8. No problem at all with the Python 3.6 configuration. However, after creating a new virtual Python environment with the 'requirements_python3.8.txt' file and manually installing cudatoolkit=10.0 and cudnn=7.6, SynthSeg never detects my GPU resources, so it runs on CPU only.

I've been trying several different solutions but nothing ended up working.

The explicit steps I'm following (in my terminal; I'm using Ubuntu 18.04) are the ones below:

  1. 'conda create -n synthseg38 python=3.8'
  2. 'conda activate synthseg38'
  3. 'pip install -r /path/to/SynthSeg/requirements_python3.8.txt'
  4. 'conda install -c conda-forge cudatoolkit=10.0 cudnn=7.6'

I get the following 'error' messages (it still runs, but on the CPU) when I try to run SynthSeg with the GPU:

"""
Using TensorFlow backend.

SynthSeg-robust 2.0

using 1 thread
2023-02-28 10:48:10.895447: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2023-02-28 10:48:11.182097: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-02-28 10:48:11.182307: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0000:00:06.0 name: Tesla P100-PCIE-16GB computeCapability: 6.0
coreClock: 1.3285GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2023-02-28 10:48:11.182553: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2023-02-28 10:48:11.182648: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcublas.so.10'; dlerror: libcublas.so.10: cannot open shared object file: No such file or directory
2023-02-28 10:48:11.217937: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2023-02-28 10:48:11.218516: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2023-02-28 10:48:11.218643: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcusolver.so.10'; dlerror: libcusolver.so.10: cannot open shared object file: No such file or directory
2023-02-28 10:48:11.218721: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcusparse.so.10'; dlerror: libcusparse.so.10: cannot open shared object file: No such file or directory
2023-02-28 10:48:11.223702: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2023-02-28 10:48:11.223741: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1598] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2023-02-28 10:48:11.224138: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2023-02-28 10:48:11.232404: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2095074999 Hz
2023-02-28 10:48:11.232595: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fe2f8000b20 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2023-02-28 10:48:11.232625: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2023-02-28 10:48:11.234544: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2023-02-28 10:48:11.234573: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]
predicting 1/1
"""

I expect this to be either an error in library compatibility or something missing from the requirements.txt file for Python 3.8. I am not an expert at all in tensorflow/cuda packages, so sorry if this is irrelevant, but I would appreciate some guidance! :)

Input dimensions?

Hi! I can't get this to run. I tried on my data and got

lab2im/edit_volumes.py", line 369, in align_volume_to_ref
    ras_axes_flo[swapped_axis_idx], ras_axes_flo[i] = ras_axes_flo[i], ras_axes_flo[swapped_axis_idx]
ValueError: setting an array element with a sequence.

and then I tried on the training segmentation SynthSeg_predict.py data/training_label_maps/subject01_seg.nii.gz just to check if my image dimensions are the problem, and that gave

merge.py", line 362, in build
    'Got inputs shapes: %s' % (input_shape))
ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 256, 64, 64, 96), (None, 192, 64, 64, 1920)]

Any ideas?

Installation issues.

Hi,

I can't seem to install SynthSeg using
pip install -r requirements.txt

I tried creating virtual envs with 3.6, 3.7, 3.8, 3.9, and 3.10 for the installation, but none worked.

It would be really kind of you if you could provide workable installation instructions for SynthSeg, specifying version requirements etc. in detail. Your help here would be much appreciated.

Transferable to non MRI modalities?

Hi, bit of a research question: what is your experience with, or thoughts on, applying this GMM generative model to other modalities such as CT scans? Thanks!

Computing SD95

This isn't an issue; it's a question.

Given two labelled segmentations, given as 3D arrays of the same size, is there a simple way to compute the SD95 value using functions within the SynthSeg code base?

It looks like I could use evaluate.evaluation, but I'm not sure how to construct the numpy file needed for the path_hausdorff_95 argument.
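
For reference, a 95th-percentile surface distance can also be computed independently of the SynthSeg code base. A minimal sketch with scipy, assuming boolean binary masks and isotropic 1mm voxels:

import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(mask_a, mask_b):
    # surface voxels of each binary mask
    surf_a = mask_a ^ binary_erosion(mask_a)
    surf_b = mask_b ^ binary_erosion(mask_b)
    # distance from each surface voxel of one mask to the nearest surface voxel of the other
    d_a_to_b = distance_transform_edt(~surf_b)[surf_a]
    d_b_to_a = distance_transform_edt(~surf_a)[surf_b]
    return np.percentile(np.concatenate([d_a_to_b, d_b_to_a]), 95)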

Tensorflow and keras issues

There seems to be some incompatibility with newer versions of TensorFlow. To avoid the endless loop of upgrading and downgrading tensorflow and keras, is there any chance the general codebase could implement the suggested solution of replacing the keras import with the tf.keras import?

keras-team/keras#14632

Fine-tuned model on monkey anatomy?

Hi!
Thank you for the very nice package 😊
I don't know if this is the right place, but I was interested to know whether anyone has tried the pre-trained models on monkey images, and whether fine-tuning them on such data would yield interesting results.

Usage of Model or Training data

Hi, you added the Apache 2.0 license to this repository. So, can we use the model and training data that are present in this repository?

The volume calculation per segmentation is given in mm³, right?

I didn't find any mention of how the volume of each segmented part of the brain is actually calculated. I looked through the code, and I believe it is reported in mm³? It would be cool to have this info in the docstrings and the --help part of the script, as well as in the readme.md.
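
For reference, a reported volume can be sanity-checked against a simple voxel count, since predictions live on a 1mm isotropic grid. A sketch with a hypothetical path (17 is FreeSurfer's left hippocampus; SynthSeg's own CSV may differ slightly if it derives volumes from soft posteriors):

import numpy as np
import nibabel as nib

seg = nib.load('seg.nii.gz')                     # hypothetical output path
voxel_vol = np.prod(seg.header.get_zooms())      # mm³ per voxel (1.0 at 1mm isotropic)
n_vox = np.count_nonzero(seg.get_fdata() == 17)  # 17 = left hippocampus in FreeSurfer
print(n_vox * voxel_vol, 'mm³')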

Non-sequential label

Hello, can I change the label values of the SynthSeg segmentation result? For example, from the original {0,2,3,4,5,7,8...58,60} to the contiguous {0,1,2,3...30,31}.

Best
Kim
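
One way to do this outside SynthSeg is a simple numpy remapping; a sketch with hypothetical file names:

import numpy as np
import nibabel as nib

seg = nib.load('synthseg_output.nii.gz')  # hypothetical path
data = seg.get_fdata().astype(np.int32)
labels = np.unique(data)                  # e.g. [0, 2, 3, 4, ..., 60]
lut = np.zeros(labels.max() + 1, dtype=np.int32)
lut[labels] = np.arange(len(labels))      # map each label to its rank: 0, 1, 2, ...
nib.save(nib.Nifti1Image(lut[data], seg.affine), 'seg_contiguous.nii.gz')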

Compatibility issue with requirements_3.8

Hi,

I tried to install the requirements for Python 3.8 [https://github.com/BBillot/SynthSeg/blob/master/requirements_python3.8.txt] using poetry. I pinned all packages to the required versions, but when I try to install the poetry env, I get the following incompatibility error:

SolverProblemError
Because opencv-python (4.6.0.66) depends on numpy (>=1.21.2)
and database-handling depends on numpy (1.18.5), opencv-python is forbidden.
So, because database-handling depends on opencv-python (4.6.0.66), version solving failed.

It seems to be an incompatibility between two packages listed in the requirements: opencv-python==4.6.0.66 and numpy==1.18.5. Should I update numpy to the first version compatible with opencv-python?

Thanks for your help
Robin

How to split the specified areas?

Hi

How do I segment only a specified area, instead of segmenting all the areas at once? For example, only the 'right putamen' and 'left putamen'.
Thank you for your reply!

Best
Kim
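
One workaround is to run the full segmentation and mask the output afterwards. A sketch with hypothetical paths (12 and 51 should be the FreeSurfer values for left and right putamen, which can be checked against labels table.txt):

import numpy as np
import nibabel as nib

seg = nib.load('synthseg_output.nii.gz')  # hypothetical path
data = seg.get_fdata().astype(np.int32)
# keep only left (12) and right (51) putamen, zero out everything else
putamen = np.where(np.isin(data, [12, 51]), data, 0)
nib.save(nib.Nifti1Image(putamen, seg.affine), 'putamen_only.nii.gz')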

FreeSurfer dataset

Hello,

The paper mentions a dataset of 39 T1 images with manual segmentations. FreeSurfer is cited as the source of the data, but I haven't been able to figure out where exactly to obtain it. I thought they might be somewhere in the 10GB FreeSurfer installation, and I have found a few T1 images in there, but no matching segmentation maps.

Any tips on where to find this dataset?

Thanks,
Cory

Is there any PyTorch implementation?

Hi,

I've read your work on SynthSeg and really like it. I have a few ideas and want to check whether they work.
SynthSeg is implemented with TensorFlow and Keras; however, my most-used deep learning framework is PyTorch, so it's hard for people like me to develop our own algorithms based on your repo.
I wonder whether there is a PyTorch implementation of SynthSeg? If not, I believe I'll reproduce it in PyTorch.

Thanks

Recommended GPU

What GPU would you recommend?
I tried it with an RTX 3090: SynthSeg takes up all the GPU memory and freezes.
It probably has something to do with the cuDNN version.
An RTX 2090 was fine, but an out-of-memory error occurred sometimes.
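
Not a confirmed fix for this particular freeze, but a common TensorFlow workaround is to enable on-demand GPU memory growth instead of letting TF pre-allocate the whole card:

import tensorflow as tf

# allocate GPU memory on demand rather than reserving it all at start-up
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)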

Suggestion for code improvement

https://github.com/BBillot/SynthSeg/blob/master/ext/lab2im/utils.py#L245-L260

neutral = list(set(label_list) & set(neutral_FS_labels))
left = list(
    label_list[
        ((label_list > 0) & (label_list < 14))
        | ((label_list > 16) & (label_list < 21))
        | ((label_list > 24) & (label_list < 40))
        | ((label_list > 135) & (label_list < 138))
        | ((label_list > 20100) & (label_list < 20110))
    ]
)
right = list(
    label_list[
        ((label_list > 39) & (label_list < 72))
        | ((label_list > 162) & (label_list < 165))
        | ((label_list > 20000) & (label_list < 20010))
    ]
)

missing_labels = set.difference(set(label_list), set(neutral + left + right))

if missing_labels:
    raise Exception(
        "labels {} not in our current FS classification, "
        "please update get_list_labels in utils.py".format(missing_labels)
    )

@BBillot I am happy to create a PR if you prefer it that way.

"path_segmentation_labels_s1" isn't used in function "training_s1"

Thank you for sharing the fantastic code!

I think that "path_segmentation_labels_s1" should be used in the function "training_s1", though it is passed as None:
segmentation_labels=None

path_segmentation_labels_s1 = '../../data/tutorial_7/segmentation_labels_s1.npy'
model_dir_s1 = './outputs_tutorial_7/training_s1'  # folder where the models will be saved
training_s1(labels_dir=labels_dir_s1,
            model_dir=model_dir_s1,
            generation_labels=path_generation_labels,
            segmentation_labels=None,
            n_neutral_labels=18,
            generation_classes=path_generation_classes,
            target_res=1,
            output_shape=160,
            prior_distributions='uniform',
            prior_means=[0, 255],
            prior_stds=[0, 50],
            randomise_res=True)

While the function "training_s2" seems to use "path_segmentation_labels_s2"
segmentation_labels=path_segmentation_labels_s2

training_s2(labels_dir=labels_dir_s2,
            model_dir=model_dir_s2,
            generation_labels=path_generation_labels,
            n_neutral_labels=18,
            segmentation_labels=path_segmentation_labels_s2,
            generation_classes=path_generation_classes,
            grouping_labels=grouping_labels,
            target_res=1,
            output_shape=160,
            prior_distributions='uniform',
            prior_means=[0, 255],
            prior_stds=[0, 50],
            randomise_res=True)

ValueError: axes don't match array in prediction of S1 unit at 7-synthseg+.py

Thank you for sharing synthseg.

To run prediction with the S1 unit in SynthSeg+, I'm working on 7-synthseg+.py for training S1 and on 4-predict.py for prediction;
however, it returns "ValueError: axes don't match array". Here is how the error is reproduced.

Environment: requirements_python3.6.txt

  1. Prepare the image/label dataset by running scripts/tutorials/1-generation_visualisation.py:
cd Synthseg/scripts/tutorials/
python 1-generation_visualisation.py 
  2. Train the S1 unit with the part of 7-synthseg+.py below. This ran and saved the model weights as "scripts/tutorials/outputs_tutorial_7/training_s1/dice_001.h5".
from SynthSeg.training import training as training_s1

# ------------------ segmenter S1
labels_dir_s1 = '../../data/training_label_maps'
path_generation_labels = '../../data/labels_classes_priors/generation_labels.npy'
path_generation_classes = '../../data/labels_classes_priors/generation_classes.npy'
path_segmentation_labels_s1 = '../../data/tutorial_7/segmentation_labels_s1.npy'
model_dir_s1 = './outputs_tutorial_7/training_s1' 

training_s1(labels_dir=labels_dir_s1,
            model_dir=model_dir_s1,
            generation_labels=path_generation_labels,
            segmentation_labels=None,
            n_neutral_labels=18,
            generation_classes=path_generation_classes,
            target_res=1,
            output_shape=160,
            prior_distributions='uniform',
            prior_means=[0, 255],
            prior_stds=[0, 50],
            randomise_res=True,
            dice_epochs=1,
            steps_per_epoch=1000,)
  3. Predict with the trained S1 model using the following code, based on 4-predict.py:
from SynthSeg.predict import predict

path_images = './outputs_tutorial_1/image.nii.gz'
path_segm = './outputs_tutorial_7/predicted_segmentations-S1'
path_posteriors = './outputs_tutorial_7/predicted_segmentations-S1'
path_vol = './outputs_tutorial_7/predicted_information/volumes.csv'
path_model = './outputs_tutorial_7/training_s1/dice_001.h5'
path_segmentation_labels = '../../data/tutorial_7/segmentation_labels_s1.npy'
path_segmentation_names = '../../data/labels_classes_priors/synthseg_segmentation_names.npy'
cropping = 192
target_res = 1.
path_resampled = './outputs_tutorial_7/predicted_information'
flip = True
n_neutral_labels = 18
sigma_smoothing = 0.5
topology_classes = '../../data/labels_classes_priors/synthseg_topological_classes.npy'
keep_biggest_component = True
n_levels = 5
nb_conv_per_level = 2
conv_size = 3
unet_feat_count = 24
activation = 'elu'
feat_multiplier = 2
gt_folder = None
compute_distances = True

predict(path_images,
        path_segm,
        path_model,
        path_segmentation_labels,
        n_neutral_labels=n_neutral_labels,
        path_posteriors=path_posteriors,
        path_resampled=path_resampled,
        path_volumes=path_vol,
        names_segmentation=path_segmentation_names,
        cropping=cropping,
        target_res=target_res,
        flip=flip,
        topology_classes=topology_classes,
        sigma_smoothing=sigma_smoothing,
        keep_biggest_component=keep_biggest_component,
        n_levels=n_levels,
        nb_conv_per_level=nb_conv_per_level,
        conv_size=conv_size,
        unet_feat_count=unet_feat_count,
        feat_multiplier=feat_multiplier,
        activation=activation,
        gt_folder=gt_folder,
        compute_distances=compute_distances)
  4. This eventually returns the following "ValueError: axes don't match array" error:
Traceback (most recent call last):
  File "4-prediction-S1.py", line 137, in <module>
    predict(path_images,
  File "/home/ubuntu/SynthSeg/SynthSeg/predict.py", line 157, in predict
    net = build_model(path_model=path_model,
  File "/home/ubuntu/SynthSeg/SynthSeg/predict.py", line 467, in build_model
    net.load_weights(path_model, by_name=True)
  File "/home/ubuntu/SynthSeg/synthseg_env/lib/python3.8/site-packages/keras/engine/saving.py", line 492, in load_wrapper
    return load_function(*args, **kwargs)
  File "/home/ubuntu/SynthSeg/synthseg_env/lib/python3.8/site-packages/keras/engine/network.py", line 1225, in load_weights
    saving.load_weights_from_hdf5_group_by_name(
  File "/home/ubuntu/SynthSeg/synthseg_env/lib/python3.8/site-packages/keras/engine/saving.py", line 1289, in load_weights_from_hdf5_group_by_name
    weight_values = preprocess_weights_for_loading(
  File "/home/ubuntu/SynthSeg/synthseg_env/lib/python3.8/site-packages/keras/engine/saving.py", line 980, in preprocess_weights_for_loading
    weights[0] = np.transpose(weights[0], (3, 2, 0, 1))
  File "<__array_function__ internals>", line 5, in transpose
  File "/home/ubuntu/SynthSeg/synthseg_env/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 651, in transpose
    return _wrapfunc(a, 'transpose', axes)
  File "/home/ubuntu/SynthSeg/synthseg_env/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 61, in _wrapfunc
    return bound(*args, **kwds)
ValueError: axes don't match array

Identical transformations for T1 and T2 images?

This is a question, not an 'issue'. When I create T1 and T2 images as below:

        im, lab = brain_generator.generate_brain()
        t1_im = im[:, :, :, 0]
        t2_im = im[:, :, :, 1]

are the T1, T2, and label images consistent with each other? For example, are they all positioned the same? Or could the T1 and T2 differ slightly, as when a person moves their head in the scanner in the middle of an acquisition?

Can I use/modify this software?

I see no explicit license, but I assume that you folks put this software up here so that others can use it. Would you mind adding a license? Personally, I would be excited to see this software available under an OSI approved open-source license, but that's your choice, of course. Thanks!

Matlab input folder name can't be too short

First off, great idea! I'm using this to generate brain masks for ADC maps.

The issue comes from line 39 in SynthSeg.m:
if strcmp(input(end-3:end), '.nii')>0 || strcmp(input(end-6:end), '.nii.gz')>0

My folder is named 'T1w', so I got this error: "Array indices must be positive integers or logical values." (indexing input(end-6:end) fails on a path shorter than 7 characters).

Changing the folder name to 'T1_weighted' solved the issue.

Also, the output is saved as .nii.nii. Probably another easy fix there.

Q about generation_labels.npy

I'm trying to run SynthSeg_training.py on my own NIfTI files. Do I leave generation_labels.npy as-is, or do I need to modify it for my own data?

[This is a question, not an issue. If there is a discussion forum for questions like this, please let me know.]

Input and output dimension mismatch

Hi, I have input data of size 256x256x128 and tried to generate segmentation results following the commands given in the Readme file.
When I ran the code, the output size was 240x240x192, so there is a mismatch between the input and output sizes.

Here is the command i used for generating results
python ./scripts/commands/SynthSeg_predict.py --i ./data/inputs/2/anat.nii --o ./data/outputs/anat2_seg.nii

Here is a link to the input data; it looks like raw data without any pre-processing step:
https://escholarship.umassmed.edu/cs_schizbull08/

Does the dimension mismatch only occur with raw data? In this case, how can we overlay the output segmentation maps on the data?

Can you please let me know if I missed anything here?

Thanks..

High-resolution T1w image segmentation

Good morning!

I used SynthSeg 2.0 to segment a T1w image acquired at 7T.
While the overall segmentation is generally ok, some areas are rather poorly segmented (see the left and right superior frontal gyrus in the screenshot below for instance).

Screenshot from 2023-05-11 11-58-05

Do you have any recommendations as to how to improve these results?
Do you think that fine-tuning the current model with 7T data is necessary, or is there another way?
Thanks in advance for your help, and thank you for the very nice model and package!

Q about numpy files created by evaluation method

This is a question, not an issue.

I understand that the evaluation method in evaluate.py saves a number of numpy files:

  • dice.npy
  • hausdorff.npy
  • hausdorff_95.npy

and so on. I understand that these are two-dimensional arrays where the rows correspond to segmentation files and the columns correspond to the region numbers for anatomical regions.

How can I determine the row names (i.e., the segmentation files) and the column names (i.e., the region IDs)? I want to load these numpy files and do analysis using them by creating perhaps Pandas data frames or something similar.

Are the implicit row and column headers constant across each of these files (for a given call to evaluation)?

I'm guessing these are defined by the values of label_list and path_segs, but I wanted to be sure.

Can't run the command python ./scripts/commands/SynthSeg_predict.py --i <image> --o <segmentation> --post <post> --resample <resample> --vol <vol>

Hi! Thank you for this code. However, I want to test SynthSeg on some data, and I tried using the command line (from cmd): python ./scripts/commands/SynthSeg_predict.py --i --o --post --resample --vol with the corresponding image and segmentation paths, but it won't work.
What might be wrong? Or is there another way to run all those scripts (from the Console in PyCharm, for example)?

Git LFS for the model file

Good Day @BBillot ,

Thank you for releasing the source code and model for SynthSeg.
I am interested in trying it out, but I am experiencing difficulties getting the SynthSeg.h5 model file that you have provided.

When I perform a git lfs pull, I am told that...

batch_response: Git LFS is disabled for this repository
error: failed to fetch some objects from 'https://github.com/BBillot/SynthSeg.git/info/lfs'

A solution to this would be greatly appreciated!

Thank you very much!

ValueError: No such layer: labels_out

I'm getting the following error when running SynthSeg.training.training:

Traceback (most recent call last):
  File "/home/miran045/reine097/PycharmProjects/SynthSeg/scripts/SynthSeg_scripts/SynthSeg_training.py", line 44, in <module>
    model_dir=path_model_dir)
  File "/home/miran045/reine097/PycharmProjects/SynthSeg/SynthSeg/training.py", line 263, in training
    wl2_model = metrics.metrics_model(wl2_model, segmentation_labels, 'wl2')
  File "/home/miran045/reine097/PycharmProjects/SynthSeg/SynthSeg/metrics_model.py", line 21, in metrics_model
    labels_gt = input_model.get_layer('labels_out').output
  File "/home/miran045/reine097/.conda/envs/synthseg_msi/lib/python3.7/site-packages/keras/engine/network.py", line 365, in get_layer
    raise ValueError('No such layer: ' + name)
ValueError: No such layer: labels_out

I'm using version 2.3.1 of keras. I use the default arguments for all but the first two arguments of SynthSeg.training.training:

training(labels_dir=path_training_label_maps,
         model_dir=path_model_dir)

path_training_label_maps points to a directory containing NIFTI files with FreeSurfer segmentations.

request: docker container

It would be great to have v2.0 available in a Docker or Singularity container. Currently I could only find this one from over a year ago.
