andreped / livermask

💥 Command line tool for automatic liver parenchyma and liver vessel segmentation in CT using a pretrained deep learning model

Home Page: https://pypi.org/project/livermask/

License: MIT License

Topics: livermask, unet, segmentation, pretrained-models, deep-learning, liver, liver-segmentation, command-line-tool, free-to-use, open

livermask's Introduction

---
title: "livermask: Automatic Liver Parenchyma and vessel segmentation in CT"
colorFrom: indigo
colorTo: indigo
sdk: docker
app_port: 7860
emoji: 🔎
pinned: false
license: mit
app_file: demo/app.py
---

livermask

Automatic liver parenchyma and vessel segmentation in CT using deep learning


livermask was developed by SINTEF Medical Technology to provide an open tool to accelerate research.

Demo

An online version of the tool is openly available on Hugging Face Spaces, enabling researchers to easily test the software on their own data without installing it locally.

Install

A stable release is available on PyPI:

pip install livermask

Alternatively, to install from source do:

pip install git+https://github.com/andreped/livermask.git

Since TensorFlow 2.4 only supports Python 3.6-3.8, livermask does as well. The software is also compatible with Anaconda; however, the recommended way of installing livermask is with pip, which also works inside conda environments.
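For instance, a compatible conda environment can be created before installing livermask (the environment name below is arbitrary):

conda create -n livermask-env python=3.7
conda activate livermask-env
pip install livermask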

(Optional) To add GPU inference support for liver vessel segmentation (which uses Chainer and CuPy), you need to install CuPy. This is easily done by installing cupy-cudaX, where X is the CUDA version you have installed, for instance cupy-cuda110 for CUDA 11.0:

pip install cupy-cuda110

The program has been tested using Python 3.7 on Windows, macOS, and Ubuntu Linux 20.04.

Usage

livermask --input path-to-input --output path-to-output
| command | description |
|---|---|
| `--input` | full path to the input data; either a single NIfTI file or a directory of files |
| `--output` | full path to the output; either an output name or a directory (if a directory was provided as input) |
| `--cpu` | disable the GPU and force all computations on the CPU |
| `--verbose` | enable verbose output |
| `--vessels` | also segment the liver vessels |
| `--extension` | which extension to save the output in (default: `.nii`) |
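For example, to segment both parenchyma and vessels for all volumes in a folder, with verbose output (paths are illustrative):

livermask --input /data/ct-volumes --output /data/predictions --vessels --verbose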

Using code directly

If you wish to use the code directly (not as a CLI and without installing), you can run this command:

python -m livermask.livermask --input path-to-input --output path-to-output

DICOM/NIfTI format

The pipeline assumes the input is in the NIfTI format and produces a binary volume in the same format (.nii or .nii.gz). DICOM can be converted to NIfTI using the CLI dcm2niix, like so:

dcm2niix -s y -m y -d 1 "path_to_CT_folder" "output_name"

Note that "-d 1" assumes that "path_to_CT_folder" is the folder just above the set of DICOM scans you want to import and convert. It can be removed if you want to convert multiple series at the same time. It is possible to pass "." as "output_name", which in theory should output a file with the same name as the DICOM folder, but that doesn't seem to happen...
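Assuming the conversion produced a file named output_name.nii in the working directory, the two tools can be chained (the names below are illustrative):

dcm2niix -s y -m y -d 1 "path_to_CT_folder" "output_name"
livermask --input output_name.nii --output output_name_seg --verbose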

Troubleshooting

You might have issues downloading the models when using a VPN. If so, try disabling the VPN and downloading again.

If the program fails to install, try forcing a reinstall from source:

pip install --force-reinstall --no-deps git+https://github.com/andreped/livermask.git

If you experience issues with numpy after installing CuPy, try reinstalling CuPy with a pinned version range:

pip install 'cupy-cuda110>=7.7.0,<8.0.0'
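To verify that CuPy was installed correctly and can see the GPU, a quick sanity check is (plain CuPy, not a livermask command):

python -c "import cupy; print(cupy.cuda.runtime.getDeviceCount())"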

Applications of livermask

Segmentation performance metrics

The segmentation models were evaluated against manual annotations on an internal dataset. See Table E in S4 Appendix of the Supporting Information of the PLOS ONE paper cited below for more information. The table presented there is reproduced here, reporting the Dice similarity coefficient (DSC) and the 95th percentile Hausdorff distance (HD95):

| Class | DSC | HD95 |
|---|---|---|
| Parenchyma | 0.946±0.046 | 10.122±11.032 |
| Vessels | 0.355±0.090 | 24.872±5.161 |
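For reference, the DSC reported above measures volume overlap between a predicted and a ground-truth mask. A minimal numpy implementation looks like this (a sketch for illustration, not code from livermask itself):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)

# Example on two random masks (illustrative only).
rng = np.random.default_rng(0)
a = rng.random((64, 64)) > 0.5
b = rng.random((64, 64)) > 0.5
print(dice_coefficient(a, b))
```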

The parenchyma segmentation model was trained on the LiTS dataset, whereas the vessel model was trained on a local dataset (Oslo-CoMet). The LiTS dataset is openly accessible and can be downloaded online.

The Oslo-CoMet dataset included 60 patients, of which 11 representative patients were held out for the performance assessment.

Acknowledgements

If you found this tool helpful in your research, please consider citing it (see the Zenodo record below for more information on how to cite):

@software{andre_pedersen_2023_7574587,
  author       = {André Pedersen and Javier Pérez de Frutos},
  title        = {andreped/livermask: v1.4.1},
  month        = jan,
  year         = 2023,
  publisher    = {Zenodo},
  version      = {v1.4.1},
  doi          = {10.5281/zenodo.7574587},
  url          = {https://doi.org/10.5281/zenodo.7574587}
}

In addition, the segmentation performance of the tool was presented in the paper below; please cite it as well if it is relevant for your study:

@article{perezdefrutos2022ddmr,
    title = {Learning deep abdominal CT registration through adaptive loss weighting and synthetic data generation},
    author = {Pérez de Frutos, Javier AND Pedersen, André AND Pelanis, Egidijus AND Bouget, David AND Survarachakan, Shanmugapriya AND Langø, Thomas AND Elle, Ole-Jakob AND Lindseth, Frank},
    journal = {PLOS ONE},
    publisher = {Public Library of Science},
    year = {2023},
    month = {02},
    volume = {18},
    doi = {10.1371/journal.pone.0282110},
    url = {https://doi.org/10.1371/journal.pone.0282110},
    pages = {1-14},
    number = {2}
}


livermask's Issues

Model downloads fail

Recent CI builds failed as it seems like gdown was unable to download the models from Google Drive.

I have checked the drive, and the models are there and available.

Hence, there is probably something wrong with the most recent version of gdown. We should try downgrading gdown and testing again.

Fine tuning on new images - input shape?

Hello again Andre,
I'd like to fine-tune your network on MRIs, to see whether I can get good results for liver segmentation on MRI instead of CT.

Can I ask you how the data generator should be organized?
In particular, I am having some issues with input shape of the image and of the mask.
I can see the input image for prediction should be like (1,1,512,512,1), so I guess it's (batchsize, 1 (slice), 512,512, 1 channel)?
It would be super if you could share an example of how the input data is passed to the net.

Thank you very much and have a good weekend,
Silvia
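Below is a minimal sketch of how a single CT slice could be shaped into such an input batch. It assumes the (batch, 1, height, width, channels) convention guessed in the question; the file name, intensity window, and resampling are illustrative and not taken from livermask's own training code:

```python
import numpy as np
import nibabel as nib
from scipy.ndimage import zoom

# Load a CT volume and pick one axial slice
# (file path is illustrative; assumes the axial axis is last).
volume = nib.load("ct_volume.nii").get_fdata()
slice_2d = volume[..., volume.shape[-1] // 2]

# Resample to 512x512 and clip/normalize HU values (window is an assumption).
slice_2d = zoom(slice_2d, (512 / slice_2d.shape[0], 512 / slice_2d.shape[1]), order=1)
slice_2d = np.clip(slice_2d, -1024, 1024) / 1024.0

# Shape into (batch, 1, 512, 512, 1) as suggested in the question.
batch = slice_2d[np.newaxis, np.newaxis, ..., np.newaxis].astype(np.float32)
print(batch.shape)  # (1, 1, 512, 512, 1)
```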

MissingSchema issue

```
Traceback (most recent call last):
  File "C:\Users\Cyanb\anaconda3\envs\spade\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\Cyanb\anaconda3\envs\spade\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\Cyanb\anaconda3\envs\spade\lib\site-packages\livermask\livermask.py", line 116, in <module>
    main()
  File "C:\Users\Cyanb\anaconda3\envs\spade\lib\site-packages\livermask\livermask.py", line 112, in main
    func(*vars(ret).values())
  File "C:\Users\Cyanb\anaconda3\envs\spade\lib\site-packages\livermask\livermask.py", line 40, in func
    get_model(name)
  File "C:\Users\Cyanb\anaconda3\envs\spade\lib\site-packages\livermask\utils\utils.py", line 10, in get_model
    gdown.cached_download(url, output, md5=md5)  #, postprocess=gdown.extractall)
  File "C:\Users\Cyanb\anaconda3\envs\spade\lib\site-packages\gdown\cached_download.py", line 123, in cached_download
    download(url, temp_path, quiet=quiet, proxy=proxy, speed=speed)
  File "C:\Users\Cyanb\anaconda3\envs\spade\lib\site-packages\gdown\download.py", line 114, in download
    res = sess.get(url, headers=headers, stream=True)
  File "C:\Users\Cyanb\anaconda3\envs\spade\lib\site-packages\requests\sessions.py", line 555, in get
    return self.request('GET', url, **kwargs)
  File "C:\Users\Cyanb\anaconda3\envs\spade\lib\site-packages\requests\sessions.py", line 528, in request
    prep = self.prepare_request(req)
  File "C:\Users\Cyanb\anaconda3\envs\spade\lib\site-packages\requests\sessions.py", line 466, in prepare_request
    hooks=merge_hooks(request.hooks, self.hooks),
  File "C:\Users\Cyanb\anaconda3\envs\spade\lib\site-packages\requests\models.py", line 316, in prepare
    self.prepare_url(url, params)
  File "C:\Users\Cyanb\anaconda3\envs\spade\lib\site-packages\requests\models.py", line 390, in prepare_url
    raise MissingSchema(error)
requests.exceptions.MissingSchema: Invalid URL '': No schema supplied. Perhaps you meant http://?
```

Did anyone have the same issue as me?

Verbose

I apologize in advance for a possibly stupid question, but what does the --verbose flag do? I used it but did not understand what changed.

Are there performance metrics for the model?

Describe the solution you'd like
I would just like a section in the README about how well this model performs in segmenting liver parenchyma and vessels. This would be useful when integrating it into scientific papers.

Dependencies collision

When installing the latest version of livermask on macOS 12:

pip install livermask==1.4.0

in the installation log we observe:

> ERROR: livermask 1.4.0 has requirement importlib-metadata==4.8.1, but you'll have importlib-metadata 5.2.0 which is incompatible.
> ERROR: livermask 1.4.0 has requirement Werkzeug==2.0.1, but you'll have werkzeug 2.2.2 which is incompatible.

It still manages to install, but this is suboptimal. We should find a way to prevent this from happening.
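As a workaround until the pins are relaxed, installing livermask into a clean virtual environment avoids clashes with already-installed packages, and pip check then reports any remaining conflicts (the environment name is arbitrary):

python -m venv livermask-venv
source livermask-venv/bin/activate
pip install livermask
pip check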

GitHub Releases HTTP download issue

When running livermask on a server solution, I suddenly got this issue, which seems to be sporadic:

> requests.exceptions.HTTPError: 503 Server Error: Egress is over the account limit.

It seems to fail when downloading the parenchyma model, which is downloaded from the GitHub Release tag.

It might be that we have to set up our own self-hosted storage solution for these models instead.

Model no longer accessible

It seems like the model can no longer be downloaded. I might have accidentally deleted it while cleaning up my Google Drive.

Segmentation is not performed - possible typo in Livermask.py?

Hello Andre, and thank you very much for sharing your code!
I have been trying to install and run your code on my PC, but unfortunately nothing happens after the first tqdm progress bar (no log information is shown, and even though everything seems to run fine, no segmentation is produced).

I have a doubt:

Line 73 of livermask.py:

```python
if not curr.endswith(".ini"):
    continue
```

Could this be a typo (".ini" instead of ".nii")?
Maybe my files are all skipped and that's why I don't get a segmentation nor a warning.

Many thanks!
Silvia
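If it is indeed a typo, the intended filter was presumably something along these lines (a sketch, checking both plain and compressed NIfTI suffixes; the file list is illustrative):

```python
files = ["scan1.nii", "scan2.nii.gz", "notes.ini"]  # illustrative file list
for curr in files:
    # Skip anything that is not a NIfTI file (.nii or .nii.gz).
    if not curr.endswith((".nii", ".nii.gz")):
        continue
    print("would segment:", curr)
```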

Error when installing livermask in windows 10

I get the following errors when installing livermask with "pip install git+https://github.com/andreped/livermask.git":
```
Downloading llvmlite-0.36.0-cp38-cp38-win_amd64.whl (16.0 MB)
   |███████████████████████           | 11.5 MB ...
ERROR: Exception:
Traceback (most recent call last):
  File "c:\program files\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 438, in _error_catcher
    yield
  File "c:\program files\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 519, in read
    data = self._fp.read(amt) if not fp_closed else b""
  File "c:\program files\python38\lib\site-packages\pip\_vendor\cachecontrol\filewrapper.py", line 62, in read
    data = self.__fp.read(amt)
  File "c:\program files\python38\lib\http\client.py", line 455, in read
    n = self.readinto(b)
  File "c:\program files\python38\lib\http\client.py", line 499, in readinto
    n = self.fp.readinto(b)
  File "c:\program files\python38\lib\socket.py", line 669, in readinto
    return self._sock.recv_into(b)
  File "c:\program files\python38\lib\ssl.py", line 1241, in recv_into
    return self.read(nbytes, buffer)
  File "c:\program files\python38\lib\ssl.py", line 1099, in read
    return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\program files\python38\lib\site-packages\pip\_internal\cli\base_command.py", line 180, in _main
    status = self.run(options, args)
  File "c:\program files\python38\lib\site-packages\pip\_internal\cli\req_command.py", line 204, in wrapper
    return func(self, options, args)
  File "c:\program files\python38\lib\site-packages\pip\_internal\commands\install.py", line 318, in run
    requirement_set = resolver.resolve(
  File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 127, in resolve
    result = self._result = resolver.resolve(
  File "c:\program files\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 473, in resolve
    state = resolution.resolve(requirements, max_rounds=max_rounds)
  File "c:\program files\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 367, in resolve
    failure_causes = self._attempt_to_pin_criterion(name)
  File "c:\program files\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 213, in _attempt_to_pin_criterion
    criteria = self._get_criteria_to_update(candidate)
  File "c:\program files\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 203, in _get_criteria_to_update
    name, crit = self._merge_into_criterion(r, parent=candidate)
  File "c:\program files\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 172, in _merge_into_criterion
    if not criterion.candidates:
  File "c:\program files\python38\lib\site-packages\pip\_vendor\resolvelib\structs.py", line 139, in __bool__
    return bool(self._sequence)
  File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 143, in __bool__
    return any(self)
  File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 129, in <genexpr>
    return (c for c in iterator if id(c) not in self._incompatible_ids)
  File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 33, in _iter_built
    candidate = func()
  File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\resolvelib\factory.py", line 200, in _make_candidate_from_link
    self._link_candidate_cache[link] = LinkCandidate(
  File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 306, in __init__
    super().__init__(
  File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 151, in __init__
    self.dist = self._prepare()
  File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 234, in _prepare
    dist = self._prepare_distribution()
  File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 317, in _prepare_distribution
    return self._factory.preparer.prepare_linked_requirement(
  File "c:\program files\python38\lib\site-packages\pip\_internal\operations\prepare.py", line 508, in prepare_linked_requirement
    return self._prepare_linked_requirement(req, parallel_builds)
  File "c:\program files\python38\lib\site-packages\pip\_internal\operations\prepare.py", line 550, in _prepare_linked_requirement
    local_file = unpack_url(
  File "c:\program files\python38\lib\site-packages\pip\_internal\operations\prepare.py", line 239, in unpack_url
    file = get_http_url(
  File "c:\program files\python38\lib\site-packages\pip\_internal\operations\prepare.py", line 102, in get_http_url
    from_path, content_type = download(link, temp_dir.path)
  File "c:\program files\python38\lib\site-packages\pip\_internal\network\download.py", line 157, in __call__
    for chunk in chunks:
  File "c:\program files\python38\lib\site-packages\pip\_internal\cli\progress_bars.py", line 152, in iter
    for x in it:
  File "c:\program files\python38\lib\site-packages\pip\_internal\network\utils.py", line 62, in response_chunks
    for chunk in response.raw.stream(
  File "c:\program files\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 576, in stream
    data = self.read(amt=amt, decode_content=decode_content)
  File "c:\program files\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 541, in read
    raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
  File "c:\program files\python38\lib\contextlib.py", line 131, in __exit__
    self.gen.throw(type, value, traceback)
  File "c:\program files\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 443, in _error_catcher
    raise ReadTimeoutError(self._pool, None, "Read timed out.")
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
```

@andreped can you help?

ImportError: numpy.core.multiarray failed to import

Hello,

I want to use livermask to segment CT images as a part of a project through my organization. We are really excited to use your tool because it will automate a lot of the segmentation we would have otherwise had to do. However, I stumbled on the following error:
```
RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd
RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd
Traceback (most recent call last):
  File "/Users/giulia/.conda/envs/liver/bin/livermask", line 33, in <module>
    sys.exit(load_entry_point('livermask==1.3.0', 'console_scripts', 'livermask')())
  File "/Users/giulia/.conda/envs/liver/bin/livermask", line 25, in importlib_load_entry_point
    return next(matches).load()
  File "/Users/giulia/.conda/envs/liver/lib/python3.8/importlib/metadata.py", line 77, in load
    module = import_module(match.group('module'))
  File "/Users/giulia/.conda/envs/liver/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/Users/giulia/.conda/envs/liver/lib/python3.8/site-packages/livermask-1.3.0-py3.8.egg/livermask/livermask.py", line 6, in <module>
    from scipy.ndimage import zoom
  File "/Users/giulia/.conda/envs/liver/lib/python3.8/site-packages/scipy-1.5.4-py3.8-macosx-11.0-arm64.egg/scipy/ndimage/__init__.py", line 151, in <module>
    from .filters import *
  File "/Users/giulia/.conda/envs/liver/lib/python3.8/site-packages/scipy-1.5.4-py3.8-macosx-11.0-arm64.egg/scipy/ndimage/filters.py", line 37, in <module>
    from . import _nd_image
ImportError: numpy.core.multiarray failed to import
```

I have installed numpy==1.19.5 as required by livermask, and I am working on a macOS M1 machine. Could you please help me?
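The ABI mismatch usually means scipy was compiled against a newer numpy than the one installed. As a first step, check which numpy is actually active and force-reinstall the pinned version (1.19.5 is the version mentioned above; this is a generic suggestion, not an official fix):

python -c "import numpy; print(numpy.__version__)"
pip install --force-reinstall numpy==1.19.5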
