
pranet's Introduction

PraNet: Parallel Reverse Attention Network for Polyp Segmentation (MICCAI2020-Oral)

Authors: Deng-Ping Fan, Ge-Peng Ji, Tao Zhou, Geng Chen, Huazhu Fu, Jianbing Shen, and Ling Shao.


1. Preface

  • This repository provides code for "PraNet: Parallel Reverse Attention Network for Polyp Segmentation", MICCAI 2020. (paper | Chinese version)

  • If you have any questions about our paper, feel free to contact me. If you use PraNet or the evaluation toolbox in your research, please cite this paper (BibTeX).

1.1. 🔥 NEWS 🔥

  • [2022/11/26] Our PraNet has been developed for the Huawei Ascend platform; the project can be found on Gitee, with an introduction on CSDN.

  • [2022/03/27] 💥 We release a new large-scale dataset for the Video Polyp Segmentation (VPS) task, please enjoy it. Project Link / PDF.

  • [2021/12/26] 💥 PraNet won the "Most Influential Jittor Paper (Application) Award" at the Jittor Developer Conference 2021.

  • [2021/09/07] The Jittor conversion of PraNet (inference code) is now available. It offers inference efficiency competitive with the PyTorch version, please enjoy it. Many thanks to Yu-Cheng Chou for the excellent conversion from the PyTorch framework.

  • [2021/09/05] The TensorFlow (Keras) implementation of PraNet (ResNet50/MobileNetV2 versions) is released at github-link. Thanks to Tauhid Khan.

  • [2021/08/18] An improved version (PraNet-V2) has been released: https://github.com/DengPingFan/Polyp-PVT.

  • [2021/04/23] We update the results of our PraNet on four Camouflaged Object Detection (COD) testing datasets (i.e., COD10K, NC4K, CAMO, and CHAMELEON), where the model is retrained on the COD dataset from scratch. Download links (Google Drive) are available here: result, model weight, evaluation results.

  • [2021/01/21] 💥 Our PraNet has been used as the base segmentation model in Prof. Michael I. Jordan et al.'s recent work (Distribution-Free, Risk-Controlling Prediction Sets, Journal of the ACM, 2021).

  • [2021/01/10] 💥 Our PraNet achieved the Top-1 ranking on the camouflaged object detection task (link).

  • [2020/09/18] Upload the pre-computed maps.

  • [2020/06/24] Release training/testing code.

  • [2020/05/28] Upload the pre-trained weights.

  • [2020/03/24] Create repository.

1.2. Table of Contents

Table of contents generated with markdown-toc

1.3. State-of-the-art Approaches

  1. "Selective feature aggregation network with area-boundary constraints for polyp segmentation." IEEE Transactions on Medical Imaging, 2019. paper link: https://link.springer.com/chapter/10.1007/978-3-030-32239-7_34
  2. "PraNet: Parallel Reverse Attention Network for Polyp Segmentation" IEEE Transactions on Medical Imaging, 2020. paper link: https://link.springer.com/chapter/10.1007%2F978-3-030-59725-2_26
  3. "Hardnet-mseg: A simple encoder-decoder polyp segmentation neural network that achieves over 0.9 mean dice and 86 fps" arXiv, 2021 paper link: https://arxiv.org/pdf/2101.07172.pdf
  4. "TransFuse: Fusing Transformers and CNNs for Medical Image Segmentation" arXiv, 2021. paper link: https://arxiv.org/pdf/2102.08005.pdf
  5. "Automatic Polyp Segmentation via Multi-scale Subtraction Network" MICCAI, 2021. paper link: https://arxiv.org/pdf/2108.05082.pdf
  6. "CCBANet: Cascading Context and Balancing Attention for Polyp Segmentation" MICCAI, 2021. paper link: https://link.springer.com/book/10.1007/978-3-030-87193-2?noAccess=true
  7. "Double Encoder-Decoder Networks for Gastrointestinal Polyp Segmentation" MICCAI, 2021. paper link: https://arxiv.org/pdf/2110.01939.pdf
  8. "HRENet: A Hard Region Enhancement Network for Polyp Segmentation" MICCAI, 2021. paper link: https://link.springer.com/book/10.1007/978-3-030-87193-2?noAccess=true
  9. "Learnable Oriented-Derivative Network for Polyp Segmentation" MICCAI, 2021. paper link: https://link.springer.com/book/10.1007/978-3-030-87193-2?noAccess=true
  10. "Shallow attention network for polyp segmentation" MICCAI, 2021. paper link: https://arxiv.org/pdf/2108.00882.pdf

For the latest trends in image-/video-based polyp segmentation, refer to AWESOME_VPS.md.

2. Overview

2.1. Introduction

Colonoscopy is an effective technique for detecting colorectal polyps, which are highly related to colorectal cancer. In clinical practice, segmenting polyps from colonoscopy images is of great importance, since it provides valuable information for diagnosis and surgery. However, accurate polyp segmentation is a challenging task for two major reasons: (i) polyps of the same type vary in size, color, and texture; and (ii) the boundary between a polyp and its surrounding mucosa is not sharp.

To address these challenges, we propose a parallel reverse attention network (PraNet) for accurate polyp segmentation in colonoscopy images. Specifically, we first aggregate the features in high-level layers using a parallel partial decoder (PPD). Based on the combined feature, we then generate a global map as the initial guidance area for the following components. In addition, we mine the boundary cues using a reverse attention (RA) module, which is able to establish the relationship between areas and boundary cues. Thanks to the recurrent cooperation mechanism between areas and boundaries, our PraNet is capable of calibrating any misaligned predictions, improving the segmentation accuracy.
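For a concrete picture of the reverse-attention operation described above, the snippet below gives a minimal PyTorch illustration, assuming a single-channel coarse map and a generic high-level feature tensor; it is a sketch of the idea, not the full module from the repository:

    import torch
    import torch.nn.functional as F

    def reverse_attention(coarse_map, feat):
        """coarse_map: (N, 1, h, w) logits; feat: (N, C, H, W) high-level features."""
        crop = F.interpolate(coarse_map, size=feat.shape[-2:], mode='bilinear', align_corners=False)
        att = 1 - torch.sigmoid(crop)                        # emphasize regions NOT yet predicted as polyp
        return att.expand(-1, feat.size(1), -1, -1) * feat   # erase the current estimate from the features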

Quantitative and qualitative evaluations on five challenging datasets across six metrics show that our PraNet improves the segmentation accuracy significantly, and presents a number of advantages in terms of generalizability, and real-time segmentation efficiency (∼50fps).

2.2. Framework Overview


Figure 1: Overview of the proposed PraNet, which consists of three reverse attention modules with a parallel partial decoder connection. See § 2 in the paper for details.

2.3. Qualitative Results


Figure 2: Qualitative Results.

3. Proposed Baseline

3.1. Training/Testing

The training and testing experiments were conducted using PyTorch with a single GeForce RTX TITAN GPU with 24 GB of memory.

Note that our model also supports GPUs with less memory; you can simply lower the batch size.

  1. Configuring your environment (Prerequisites):

    Note that PraNet has only been tested on Ubuntu with the environment below. It may work on other operating systems as well, but we do not guarantee that it will.

    • Creating a virtual environment in terminal: conda create -n PraNet python=3.6.

    • Installing necessary packages: PyTorch 1.1

  2. Downloading necessary data:

    • Download the testing datasets and move them into ./data/TestDataset/; they can be found at this Google Drive Link (327.2MB). It contains five sub-datasets: CVC-300 (60 test samples), CVC-ClinicDB (62 test samples), CVC-ColonDB (380 test samples), ETIS-LaribPolypDB (196 test samples), and Kvasir (100 test samples).

    • Download the training dataset and move it into ./data/TrainDataset/; it can be found at this Google Drive Link (399.5MB). It contains two sub-datasets: Kvasir-SEG (900 train samples) and CVC-ClinicDB (550 train samples).

    • Download the pretrained weights and move them to snapshots/PraNet_Res2Net/PraNet-19.pth; they can be found at this Google Drive Link (124.6MB).

    • Download the Res2Net weights from Google Drive (98.4MB).

  3. Training Configuration:

    • Assign your customized paths, such as --train_save and --train_path, in MyTrain.py.

    • Just enjoy it!

  4. Testing Configuration:

    • After downloading the pre-trained model and testing datasets, just run MyTest.py to generate the final prediction maps; point --pth_path to the directory of your trained model.

    • Just enjoy it!

3.2. Evaluating your trained model

Matlab: One-key evaluation is written in MATLAB code (Google Drive Link); please follow the instructions in ./eval/main.m and just run it to generate the evaluation results in ./res/. The complete evaluation toolbox (including data, maps, evaluation code, and results): Google Drive Link (380.6MB).

Python: Please refer to the ACM MM 2021 work UACANet: https://github.com/plemeri/UACANet

3.3. Pre-computed maps

They can be found in Google Drive Link (61.6MB).

4. Citation

Please cite our paper if you find the work useful:

@inproceedings{fan2020pranet,
  title={Pranet: Parallel reverse attention network for polyp segmentation},
  author={Fan, Deng-Ping and Ji, Ge-Peng and Zhou, Tao and Chen, Geng and Fu, Huazhu and Shen, Jianbing and Shao, Ling},
  booktitle={International conference on medical image computing and computer-assisted intervention},
  pages={263--273},
  year={2020},
  organization={Springer}
}

5. TODO LIST

If you want to improve usability or have any advice, please feel free to contact me directly (E-mail).

  • Support NVIDIA APEX training.

  • Support different backbones (VGGNet, ResNet, ResNeXt, iResNet, ResNeSt, etc.).

  • Support distributed training.

  • Support lightweight architecture and real-time inference, like MobileNet, SqueezeNet.

  • Add more comprehensive competitors.

6. FAQ

  1. If images cannot be loaded on the page (mostly due to network restrictions in mainland China).

    Solution Link

7. License

The source code is free for research and education use only. Any commercial use requires formal permission first.



pranet's People

Contributors

dengpingfan, gewelsji, johnson111788


pranet's Issues

question about mask of Kvasir dataset

Thanks for your great work first! It is great research!
When I check the Kvasir data, I find that each mask has more than two distinct pixel values. I randomly chose a mask and printed its distinct pixel values: [0 1 2 3 4 5 6 7 248 249 250 251 253 254 255]. I then visualized the different pixel values: 0 represents the background and 255 represents polyps; the remaining values lie around the polyps, i.e., on the polyp boundary.
Could you tell me how you handle this situation?
Can I convert these masks into binary images by simple thresholding?
Really looking forward to your reply!
Thanks again!
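As a hedged note (not the authors' preprocessing): the intermediate values are anti-aliasing around the polyp boundary, so thresholding at the midpoint is a reasonable way to binarize such masks. A minimal sketch, with placeholder file names:

    import numpy as np
    from PIL import Image

    mask = np.array(Image.open('mask.png').convert('L'))   # grayscale ground-truth mask
    binary = (mask >= 128).astype(np.uint8) * 255          # 0 = background, 255 = polyp
    Image.fromarray(binary).save('mask_binary.png')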

SFA

Excuse me, where did you find the SFA code? Thank you

How is the dice and iou calculated?

Hi,

Great project! Really appreciate it. I have a question: since I am not familiar with MATLAB, I am confused about how you calculate Dice and IoU in eval.m. How do you choose the threshold value?

Thanks!
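For readers with the same question, the sketch below illustrates what threshold-swept Dice/IoU generally look like in Python; it is an approximation only, and the exact convention in the authors' eval.m may differ. The mean* metrics average over a threshold sweep, while the max* metrics take the best threshold:

    import numpy as np

    def dice_iou_curve(pred, gt, eps=1e-8):
        """pred: float map in [0, 1]; gt: binary mask (bool or 0/1)."""
        gt = gt.astype(bool)
        dices, ious = [], []
        for t in np.linspace(0, 1, 256):
            p = pred >= t
            inter = np.logical_and(p, gt).sum()
            union = np.logical_or(p, gt).sum()
            dices.append(2 * inter / (p.sum() + gt.sum() + eps))
            ious.append(inter / (union + eps))
        return np.array(dices), np.array(ious)

    # meanDic/meanIoU would average these curves; maxDice/maxIoU take their maxima.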

Res2Net weights

Hello
Thanks for the great project
Where should Res2Net weights be placed after downloading?

About the ablation study

Hello, thank you for your excellent work. Regarding the metrics in the "backbone" row of the ablation study: is this backbone simply the Res2Net encoder followed by a direct decoder for prediction? If convenient, could you provide the code for your backbone? Thank you.

Training and test sets

In your paper you state that the training, validation, and test sets are randomly split by ratio. Is a random split appropriate? In some datasets, the same polyp often appears in many images, so with a random split the same polyp can end up in both the training and test sets.

Multi class attention module

Hi,
Thanks for making this code available.
Have you thought about how to expand the architecture to support more than one class? Would it be better to have one attention mask for each class? Or we can combine each label mask into one binary mask and use it for attention?

EvaluationTool

First of all thank you for your work. Then I have a question to ask you.
Are the edges used in your EvaluationTool extracted by code? I see that there is no ground truth for edges in the original datasets, so how do you measure boundary accuracy against ground truth? Thank you!

Some question about Pranet

I used the Kvasir data and the MATLAB evaluation program you provided to run several sets of comparison experiments, and I have some questions about the experimental part of the paper.

The following experiments use Adam, bs=16, lr=1e-4, and train for 20 epochs, which is the same as yours:

  1. resnet34 + Unet:
    meanDic:0.890;meanIoU:0.831;wFm:0.874;Sm:0.900;meanEm:0.944;MAE:0.031;maxEm:0.947;maxDice:0.893;maxIoU:0.834;meanSen:0.909;maxSen:1.000;meanSpe:0.978;maxSpe:0.982.

  2. res2net50_26w_4s + Unet:
    meanDic:0.895;meanIoU:0.842;wFm:0.888;Sm:0.908;meanEm:0.950;MAE:0.027;maxEm:0.953;maxDice:0.898;maxIoU:0.845;meanSen:0.896;maxSen:1.000;meanSpe:0.975;maxSpe:0.979.

  3. resnet34 + PraNet:
    meanDic:0.886;meanIoU:0.821;wFm:0.866;Sm:0.895;meanEm:0.940;MAE:0.031;maxEm:0.942;maxDice:0.889;maxIoU:0.823;meanSen:0.912;maxSen:1.000;meanSpe:0.969;maxSpe:0.973.

  4. res2net50_26w_4s + PraNet:
    meanDic:0.909;meanIoU:0.855;wFm:0.898;Sm:0.915;meanEm:0.956;MAE:0.025;maxEm:0.959;maxDice:0.912;maxIoU:0.858;meanSen:0.923;maxSen:1.000;meanSpe:0.980;maxSpe:0.984.

The following experiments use Adam, bs=16, lr=1e-4, and train for 40 epochs:

  1. resnet34 + Unet:
    meanDic:0.912;meanIoU:0.855;wFm:0.900;Sm:0.916;meanEm:0.955;MAE:0.027;maxEm:0.958;maxDice:0.914;maxIoU:0.857;meanSen:0.919;maxSen:1.000;meanSpe:0.984;maxSpe:0.988.

  2. res2net50_26w_4s + Unet:
    meanDic:0.902;meanIoU:0.847;wFm:0.889;Sm:0.911;meanEm:0.955;MAE:0.025;maxEm:0.958;maxDice:0.905;maxIoU:0.849;meanSen:0.911;maxSen:1.000;meanSpe:0.982;maxSpe:0.986.

  3. resnet34 + PraNet:
    meanDic:0.900;meanIoU:0.844;wFm:0.888;Sm:0.909;meanEm:0.947;MAE:0.026;maxEm:0.950;maxDice:0.903;maxIoU:0.847;meanSen:0.916;maxSen:1.000;meanSpe:0.969;maxSpe:0.973.

  4. res2net50_26w_4s + PraNet:
    meanDic:0.908;meanIoU:0.856;wFm:0.901;Sm:0.916;meanEm:0.956;MAE:0.024;maxEm:0.959;maxDice:0.911;maxIoU:0.859;meanSen:0.909;maxSen:1.000;meanSpe:0.986;maxSpe:0.990.

My question is:
Have you compared against a UNet with the same backbone as PraNet? I think the result reported in your paper may be for the vanilla UNet. For a fair comparison, I trained UNet with resnet34 and res2net50_26w_4s; the results show that PraNet trained for a few epochs may have about a 1% Dice improvement. However, when trained for enough epochs, UNet-resnet34 achieves 0.912, which seems to outperform PraNet. Compared with UNet, to what extent can PraNet's decoder improve performance, and is it really effective?
Thanks for your work, and I look forward to your reply.

Training and test sets

Does the test data include all of the data from the five test datasets?

Activation in `BasicConv2d`

Hi

I've noticed that in the heavily used BasicConv2d block you create a ReLU layer during block initialization, yet never use it in the actual forward pass. Since this is the most basic block of your network, this significantly decreases the number of activations.

In some other places, for example in the reverse attention branches of PraNet, you call F.relu manually; yet multiple other places (like the entirety of the aggregation) are activation-less.

Is that a bug or a feature? If the latter, did you research the impact of those linear layers compared to the usual ReLU?

Sergey
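For context, the pattern the issue describes looks roughly like the sketch below (a paraphrase for illustration, not copied from the repository): the ReLU is constructed in __init__ but never invoked in forward, so the block reduces to convolution plus batch normalization with no non-linearity.

    import torch.nn as nn

    class BasicConv2d(nn.Module):
        def __init__(self, in_planes, out_planes, kernel_size, stride=1, padding=0, dilation=1):
            super().__init__()
            self.conv = nn.Conv2d(in_planes, out_planes, kernel_size,
                                  stride=stride, padding=padding, dilation=dilation, bias=False)
            self.bn = nn.BatchNorm2d(out_planes)
            self.relu = nn.ReLU(inplace=True)   # created here ...

        def forward(self, x):
            return self.bn(self.conv(x))        # ... but never applied here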

Training Problem

Hello, when I run MyTrain.py, I get the errors below about a size mismatch.
Traceback (most recent call last):
File "MyTrain.py", line 116, in
train(train_loader, model, optimizer, epoch)
File "MyTrain.py", line 42, in train
lateral_map_5, lateral_map_4, lateral_map_3, lateral_map_2 = model(images)
File "/home/p76094266/anaconda3/envs/SINet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/p76094266/PraNet/lib/PraNet_Res2Net.py", line 145, in forward
ra5_feat = self.agg1(x4_rfb, x3_rfb, x2_rfb)
File "/home/p76094266/anaconda3/envs/SINet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/p76094266/PraNet/lib/PraNet_Res2Net.py", line 86, in forward
x3_1 = self.conv_upsample2(self.upsample(self.upsample(x1)))
RuntimeError: The size of tensor a (64) must match the size of tensor b (63) at non-singleton dimension 3

I just changed it to use a customized dataset with images of size 500x500.
What should I do to run it successfully?
Looking forward to your reply.
Thanks.
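The mismatch (64 vs. 63) typically arises because the network repeatedly down- and up-samples: when the input height/width is not divisible by 32, intermediate feature maps end up with slightly different sizes and cannot be combined. A hedged workaround sketch (not from the repository; the helper name is made up) is to pad or resize inputs so both dimensions are multiples of 32 before feeding the model:

    import torch.nn.functional as F

    def pad_to_multiple(x, multiple=32):
        """Zero-pad an NCHW tensor on the right/bottom so H and W are multiples of `multiple`."""
        h, w = x.shape[-2:]
        pad_h = (multiple - h % multiple) % multiple
        pad_w = (multiple - w % multiple) % multiple
        return F.pad(x, (0, pad_w, 0, pad_h))

    # Alternatively, simply resize 500x500 images to a divisible size such as 512x512
    # (or the training resolution) in the dataloader.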

About the code

Thank you very much for your excellent work. May I ask whether you could provide the code for the comparison experiments, e.g., UNet++?

CVC-ColonDB dataset image issue

Thank you very much for this work. After downloading the dataset you provided, I found that the test image /data/TestDataset/CVC-ColonDB/images/380.png seems to have a problem. Is the original image like this, or did something go wrong during upload?

How is the probability map output by the network converted into a black-and-white image?

In MyTest.py, as shown in the code below, the probability map output by the network is res; each pixel of res lies in [0, 1], and misc.imsave automatically multiplies res by 255, so the saved image should be a grayscale map.
(screenshot of the saving code in MyTest.py)

However, in the paper the output images are binary black-and-white images.
(screenshot of a binary prediction map from the paper)

How is the grayscale map produced by MyTest.py converted into the binary black-and-white images shown in the paper? Thanks!
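One likely answer, offered as a hedged sketch (the file names are placeholders, and this is not necessarily how the paper figures were produced): binarize the saved grayscale prediction at a fixed threshold such as 0.5.

    import numpy as np
    import imageio

    prob = imageio.imread('res.png').astype(np.float32) / 255.0   # grayscale map saved by MyTest.py
    binary = (prob >= 0.5).astype(np.uint8) * 255                 # 0 = background, 255 = polyp
    imageio.imwrite('res_binary.png', binary)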

README

Why are there two "Support distributed training" entries in the TODO list (Section 5)?

How should the channel-wise multiplication in the RA module be modified for multi-class segmentation?

First of all, thank you for sharing!
I want to apply this network to multi-class segmentation (e.g., 6 classes), but every RA module contains the following code:

     #RA
      crop_4 = F.interpolate(ra5_feat, scale_factor=0.25, mode='bilinear')
      x = -1*(torch.sigmoid(crop_4)) + 1
      x = x.expand(-1, 2048, -1, -1).mul(x4)

If the channel count after aggregation is 6, then x also has 6 channels. In that case, how should x.expand be used to match the channel count of the high-level feature map for the element-wise multiplication?
I would be very grateful for an answer!
If anything is unclear, please contact me: [email protected]
Thanks
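One possible adaptation, offered as a hedged sketch rather than the authors' method (the module and parameter names below are made up for illustration): keep a single-channel reverse-attention map per class, apply each one to the shared high-level feature with the same expand trick as the binary case, and predict a one-channel residual per class.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiClassRA(nn.Module):
        def __init__(self, feat_channels=2048, num_classes=6):
            super().__init__()
            self.num_classes = num_classes
            # one small head per class turns the attended feature into a 1-channel residual
            self.heads = nn.ModuleList(
                [nn.Conv2d(feat_channels, 1, kernel_size=1) for _ in range(num_classes)]
            )

        def forward(self, coarse_logits, feat):
            # coarse_logits: (N, num_classes, h, w) from aggregation; feat: (N, feat_channels, H, W)
            crop = F.interpolate(coarse_logits, size=feat.shape[-2:], mode='bilinear', align_corners=False)
            refined = []
            for c in range(self.num_classes):
                att = 1 - torch.sigmoid(crop[:, c:c + 1])           # per-class reverse attention, 1 channel
                x = att.expand(-1, feat.size(1), -1, -1) * feat     # same expand trick as the binary case
                refined.append(self.heads[c](x))
            return torch.cat(refined, dim=1) + crop                 # per-class residual refinement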

What is included in the dataset mentioned here? The training set has 1450 images, but there seem to be only two dataset files, while the test set has five datasets?


Download the testing dataset and move it to ./data/TestDataset/; it can be found at this download link (Google Drive).

Download the training dataset and move it to ./data/TrainDataset/; it can be found at this download link (Google Drive).

About data split (train/validation)

Hi,
I have a question about the experimental setup.


In PraNet paper,
"the images from Kvasir, and CVC-ClinicDB are randomly split into 80% for training, 10% for validation, and 10% for testing."

Contrary to what is mentioned in the PraNet paper, there seems to be no validation set in the implemented code.
Did you actually train without validation? Or am I missing something?

Thank you.

Problems running MyTest.py and main.m

1. When running MyTest.py, the following warnings appear:
/home/sxl/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py:3063: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
/home/sxl/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py:3103: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
/home/sxl/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py:2952: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
/home/sxl/anaconda3/lib/python3.6/site-packages/scipy/misc/pilutil.py:106: RuntimeWarning: overflow encountered in multiply bytedata = (data - cmin) * scale + low
After a while it simply finishes without producing anything else. When I look at the images in results, the quality is extremely poor; many are almost entirely black images with nothing in them.
2. When running main.m, the following error is reported:
(Dataset:CVC-ClinicDB; Model:PraNet) meanDic:NaN;meanIoU:NaN;wFm:NaN;Sm:NaN;meanEm:NaN;MAE:NaN;maxEm:NaN;maxDice:NaN;maxIoU:NaN;meanSen:NaN;maxSen:NaN;meanSpe:NaN;maxSpe:NaN. Elapsed time is 0.011206 seconds.
All values are NaN.
I hope the author can explain whether I did something wrong. Thanks! @DengPingFan

About modifying the dataset

Hello, does PraNet support multi-class segmentation tasks, and how exactly should it be modified? For example, two classes in addition to the background?

Information about RFB_modified function

Hello, can you give more detailed information about the RFB_modified module used in PraNet to reduce the number of channels at each scale? What is the source of this module?

Dataset question

Hello, is the dataset you provide for download the CVC-ClinicDB dataset? Could you provide the test set again? It seems it can no longer be downloaded. Many thanks!

MATLAB error: "Array indices must be positive integers or logical values."

Hello, I have a question. After running main.m on my own dataset to check the metrics in MATLAB, it reports an error at line 33 of original_WFb, Et(~GT)=Et(IDXT(~GT)): "Array indices must be positive integers or logical values." What should I do about this? The problem did not occur when I ran the original dataset. When I change the code to
Et(GT)=Et(IDXT(GT)); %To deal correctly with the edges of the foreground region
EA = imfilter(~Et,K);
the resulting Dice and IoU values are all 0.

RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

Traceback (most recent call last):
File "/home/PraNet-master/MyTrain.py", line 116, in
train(train_loader, model, optimizer, epoch)
File "/home/PraNet-master/MyTrain.py", line 50, in train
loss.backward()
File "/home/.virtualenvs/pytor1.1/lib/python3.6/site-packages/torch/tensor.py", line 107, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/.virtualenvs/pytor1.1/lib/python3.6/site-packages/torch/autograd/init.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

Process finished with exit code 1

About the comparison experiments

Hello, could you provide the pre-trained models of the other networks used in the comparison experiments?

How to explain "Reverse Attention" module?

Hi,
First of all, congratulations on your excellent work. I have a question about the "Reverse Attention" module. Intuitively we usually use spatial attention to highlight foreground areas, but in your work it is the opposite. Could you give a more theoretical explanation?

Thank you,

Multi-class

Hi, amazing work!

I want to try training this on my own data,
could you guide me on how to modify the code, the format of the masks, etc. for multi-class segmentation?
I have to segment out 3 classes + background

Did you train jointly on Kvasir and CVC-ClinicDB?

I guess the results reported in Table 1 were of a single model, trained on the mixture of Kvasir and CVC-ClinicDB images? That is, you split Kvasir and CVC-ClinicDB, then mixed all the training splits into a big training dataset, trained the model, and evaluated it on each test split? Thanks.

main.m NaN

Hello, when I evaluate with MATLAB, all the metrics are NaN, with the output shown below. What could be the problem?
(Dataset:CVC-300; Model:PraNet) meanDic:NaN;meanIoU:NaN;wFm:NaN;Sm:NaN;meanEm:NaN;MAE:NaN;maxEm:NaN;maxDice:NaN;maxIoU:NaN;meanSen:NaN;maxSen:NaN;meanSpe:NaN;maxSpe:NaN.

Testing stuck

I followed the instructions in the README to download all the weights and datasets, and put them in the correct paths. However, when I started testing by running MyTest.py on the command line, the code got stuck while executing the line model = PraNet(). Could you please tell me where the error is, or provide a clearer tutorial for running your code?

Packages installed in the conda env:
_libgcc_mutex 0.1 main
blas 1.0 mkl
ca-certificates 2020.10.14 0
certifi 2020.11.8 py36h06a4308_0
cffi 1.14.4 py36h261ae71_0
cudatoolkit 10.0.130 0
cudnn 7.6.5 cuda10.0_0
freetype 2.10.4 h5ab3b9f_0
intel-openmp 2020.2 254
jpeg 9b h024ee3a_2
lcms2 2.11 h396b838_0
ld_impl_linux-64 2.33.1 h53a641e_7
libedit 3.1.20191231 h14c3975_1
libffi 3.3 he6710b0_2
libgcc-ng 9.1.0 hdf63c60_0
libgfortran-ng 7.3.0 hdf63c60_0
libpng 1.6.37 hbc83047_0
libstdcxx-ng 9.1.0 hdf63c60_0
libtiff 4.1.0 h2733197_1
lz4-c 1.9.2 heb0550a_3
mkl 2020.2 256
mkl-service 2.3.0 py36he904b0f_0
mkl_fft 1.2.0 py36h23d657b_0
mkl_random 1.1.1 py36h0573a6f_0
ncurses 6.2 he6710b0_1
ninja 1.10.2 py36hff7bd54_0
numpy 1.19.2 py36h54aff64_0
numpy-base 1.19.2 py36hfa32c7d_0
olefile 0.46 py36_0
openssl 1.1.1h h7b6447c_0
pillow 8.0.1 py36he98fc37_0
pip 20.3 py36h06a4308_0
pycparser 2.20 py_2
python 3.6.12 hcff3b4d_2
pytorch 1.1.0 cuda100py36he554f03_0
readline 8.0 h7b6447c_0
scipy 1.5.2 py36h0b6359f_0
setuptools 50.3.2 py36h06a4308_2
six 1.15.0 py36h06a4308_0
sqlite 3.33.0 h62c20be_0
tk 8.6.10 hbc83047_0
torchvision 0.3.0 cuda100py36h72fc40a_0
wheel 0.36.0 pyhd3eb1b0_0
xz 5.2.5 h7b6447c_0
zlib 1.2.11 h7b6447c_3
zstd 1.4.5 h9ceee32_0

Thanks a lot!

How can I use PraNet for multi-label segmentation?

Dear author:
I have checked your code, but it does not have an "n_class" parameter to set the number of classes for segmentation. How can I use this code for multi-label segmentation with a one-hot output, without turning it into multiple binary classification tasks?
Best wishes!

About assessment tools

Thank you very much for your excellent work.
I downloaded your MATLAB evaluation tool ("EvaluationTool_New") and ran the main.m file directly; all the results show NaN, for example: (Dataset:CVC-300; Model:PraNet) meanDic:NaN;meanIoU:NaN;wFm:NaN;Sm:NaN;meanEm:NaN;MAE:NaN;maxEm:NaN;maxDice:NaN;maxIoU:NaN;meanSen:NaN;maxSen:NaN;meanSpe:NaN;maxSpe:NaN.

Do you have any ideas on this issue?
Thanks for viewing this question.

complete CVC300 dataset

The shared link does not contain the complete CVC-300 dataset. Could you share the complete CVC-300 or EndoScene dataset with me?
Thank you!

About License

Thanks for the great research and code!
It is very helpful. 😆

By the way, what is the license of this code?
Because I would like to use this code in other competitions.

Results on COD datasets

Thanks for your paper.
Can you provide the results on COD datasets (CAMO, CHAMELEON, COD10K, NC4K) or the pretrained weight for COD?
