
ege-unet's Introduction

EGE-UNet

This is the official code repository for "EGE-UNet: an Efficient Group Enhanced UNet for skin lesion segmentation", which was accepted as a regular paper by the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023)!

0. Main Environments
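
The list of required packages is not reproduced on this page. As a minimal sanity check (an assumption based only on the code shown further down, not the authors' full requirements), the snippets here rely on at least PyTorch, NumPy, and Pillow:

# Sanity check: print the versions of the packages used by the snippets on this page.
# This is only a minimal subset inferred from the code shown here, not the full
# requirements of the repository.
import torch
import numpy as np
import PIL

print("torch :", torch.__version__)
print("numpy :", np.__version__)
print("Pillow:", PIL.__version__)
print("CUDA  :", torch.cuda.is_available())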

1. Prepare the dataset.

  • The ISIC17 and ISIC18 datasets, divided at a 7:3 ratio, can be found here {Baidu or GoogleDrive}. (A minimal split sketch follows the directory layout below.)

  • After downloading the datasets, place them in './data/isic17/' and './data/isic18/'; the expected file structure is shown below (taking the ISIC17 dataset as an example).

  • './data/isic17/'

    • train
      • images
        • .png
      • masks
        • .png
    • val
      • images
        • .png
      • masks
        • .png
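
If you download the original ISIC archives yourself instead of the pre-split release, a 7:3 split into the layout above might look like the sketch below. The source folder names, the fixed seed, and the image-to-mask name mapping are assumptions for illustration, not the authors' exact preprocessing.

# Hedged sketch: split an image/mask folder pair 7:3 into train/ and val/.
# "all_images" and "all_masks" are hypothetical source folders; adjust to your download.
import os, random, shutil

random.seed(0)                                  # assumption: any fixed seed
src_img, src_msk = "all_images", "all_masks"
names = sorted(os.listdir(src_img))
random.shuffle(names)

n_train = int(0.7 * len(names))                 # 7:3 ratio from the README
splits = {"train": names[:n_train], "val": names[n_train:]}

for split, files in splits.items():
    os.makedirs(f"./data/isic17/{split}/images", exist_ok=True)
    os.makedirs(f"./data/isic17/{split}/masks", exist_ok=True)
    for name in files:
        shutil.copy(os.path.join(src_img, name), f"./data/isic17/{split}/images/{name}")
        # assumption: the mask name can be derived from the image name;
        # adjust this mapping to your download (e.g. an "_segmentation" suffix)
        shutil.copy(os.path.join(src_msk, name), f"./data/isic17/{split}/masks/{name}")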

2. Train the EGE-UNet.

cd EGE-UNet
python train.py

3. Obtain the outputs.

  • After training, the outputs can be found in './results/'.

ege-unet's People

Contributors

jcruan519


ege-unet's Issues

IndexError: index 1 is out of bounds for axis 1 with size 1

98%|█████████▊| 45/46 [00:37<00:00, 1.21it/s]tensor([[[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]]], device='cuda:0')
100%|██████████| 46/46 [00:38<00:00, 1.20it/s]
tensor([[[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]]], device='cuda:0')
Traceback (most recent call last):
File "P:\CJJ\EGE-UNet-main\train.py", line 169, in
main(config)
File "P:\CJJ\EGE-UNet-main\train.py", line 154, in main
loss = test_one_epoch(
File "P:\CJJ\EGE-UNet-main\engine.py", line 147, in test_one_epoch
TN, FP, FN, TP = confusion[0,0], confusion[0,1], confusion[1,0], confusion[1,1]
IndexError: index 1 is out of bounds for axis 1 with size 1

print(confusion) gives [[3014656]].
Is this caused by the predictions containing only one class?
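
The traceback suggests that the confusion matrix was computed from a batch whose predictions and targets contain only one class, so it comes back as 1x1. Assuming the metric code uses sklearn.metrics.confusion_matrix (an assumption, since engine.py is not reproduced here), passing the labels explicitly forces a 2x2 matrix:

# Hedged sketch: force a 2x2 confusion matrix even when only one class is present,
# by passing labels=[0, 1] explicitly. Adapt to the actual metric code in engine.py.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.zeros(3014656, dtype=np.int64)      # e.g. an all-background image
y_pred = np.zeros(3014656, dtype=np.int64)

confusion = confusion_matrix(y_true, y_pred, labels=[0, 1])
TN, FP, FN, TP = confusion[0, 0], confusion[0, 1], confusion[1, 0], confusion[1, 1]
print(confusion)                                # [[3014656 0] [0 0]]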

The masks in the released dataset are not binary

Hello,
I've been working with the released dataset (isic2017/2018) and noticed an issue with the mask images: they do not appear to be binary, as expected for segmentation. This may affect both the training and the evaluation of segmentation models. You can refer to the code below:

import numpy as np
from PIL import Image

path="./data_isic1718/isic2017/train/masks/ISIC_0000000_segmentation.png"

print(np.unique(np.array(Image.open(path))))

The output looks like this:
[  0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17
  18  19  20  22  23  24  25  26  27  29  30  31  32  33  34  35  36  37
  38  39  40  41  42  43  44  45  46  47  48  49  50  51  52  55  57  59
  60  61  62  64  65  66  67  68  69  70  71  72  73  74  75  76  77  78
  79  80  81  82  83  84  85  86  87  88  89  90  91  93  94  95  96  97
  98 100 101 102 103 104 105 106 107 108 109 110 112 113 115 116 117 118
 119 120 121 122 123 124 126 128 129 130 131 134 135 136 137 138 139 141
 142 143 144 145 146 147 148 149 150 151 152 153 154 155 157 158 159 160
 161 162 165 166 167 171 172 173 174 175 176 177 178 179 181 182 183 184
 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203
 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221
 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239
 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255]
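
If your copy of the masks contains intermediate grey values like the ones above, one workaround is to binarize them with a threshold before training. A minimal sketch, assuming a threshold of 127 and an in-place overwrite are acceptable for your data:

# Hedged sketch: binarize every mask in a folder with a fixed threshold.
# The threshold (127) and the in-place overwrite are assumptions; inspect a few
# masks visually before applying this to the whole dataset.
import os
import numpy as np
from PIL import Image

mask_dir = "./data_isic1718/isic2017/train/masks"   # path from this issue
for name in os.listdir(mask_dir):
    path = os.path.join(mask_dir, name)
    m = np.array(Image.open(path).convert("L"))
    binary = (m > 127).astype(np.uint8) * 255        # keep the 0/255 convention
    Image.fromarray(binary).save(path)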
 

About FPS

Could the authors provide some speed-test results on typical GPUs, such as a 1080 Ti or a 3080? Thanks.
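
For reference, a rough way to measure FPS on a single GPU is to time repeated forward passes with CUDA synchronization. The input resolution below and the way the model is instantiated are placeholders, not taken from this repository.

# Hedged sketch: measure inference FPS for a segmentation model on one GPU.
# `model` is a placeholder for however EGE-UNet is instantiated in this repo.
import time
import torch

def measure_fps(model, input_size=(1, 3, 256, 256), warmup=20, iters=100):
    device = torch.device("cuda")
    model = model.to(device).eval()
    x = torch.randn(*input_size, device=device)
    with torch.no_grad():
        for _ in range(warmup):                  # warm up kernels / cudnn autotuning
            model(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()
    return iters / (time.time() - start)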

Parameter count does not match the paper

The paper reports 0.053, but what I measure is 0.0458. I don't think this is a reproduction issue on my side, since I used the model code you open-sourced directly.
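
For comparison, parameter counts are usually taken with a simple sum over model.parameters(); differences of this size can come from counting only trainable parameters or from configuration flags (for example whether the deep-supervision heads are enabled), though which applies here is not clear from this page alone. A generic counting sketch:

# Hedged sketch: count total and trainable parameters of a PyTorch model.
# `model` is assumed to be an instantiated EGE-UNet; the constructor is not shown here.
def count_params(model):
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"total: {total / 1e6:.4f}M, trainable: {trainable / 1e6:.4f}M")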

ValueError: height and width must be > 0

Why does the testing phase after training end with this error?

#----------Testing----------#
0%| | 0/650 [00:00<?, ?it/s]
......
ValueError: height and width must be > 0
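
Errors like this during testing often come from an image or mask that loads as empty or zero-sized before the test-time transforms, though the exact cause in this repository is not shown here. A quick sketch for scanning a folder for unreadable or zero-sized files:

# Hedged sketch: report images that fail to load or have a zero dimension.
# The directory paths are examples; point them at your own test images and masks.
import os
import numpy as np
from PIL import Image

def check_dir(d):
    for name in sorted(os.listdir(d)):
        path = os.path.join(d, name)
        try:
            arr = np.array(Image.open(path))
        except Exception as e:
            print("unreadable:", path, e)
            continue
        if arr.size == 0 or min(arr.shape[:2]) == 0:
            print("zero-sized:", path, arr.shape)

check_dir("./data/isic17/val/images")
check_dir("./data/isic17/val/masks")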

Error when gt_ds is set to False

Thank you very much for providing such clean and concise code. While debugging it, I found that the program raises an error when gt_ds is set to False. How should I modify the code?
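
A likely cause (an assumption, since the training loop is not reproduced on this page) is that with deep supervision the model returns side outputs plus the final mask, while with gt_ds=False it returns only the final mask, so code that unconditionally unpacks several outputs fails. A sketch of a loss call that tolerates both cases:

# Hedged sketch: a loss call that works whether the model returns deep-supervision
# side outputs or a single mask. `criterion` stands in for the repo's loss, and the
# call signatures are assumptions.
def compute_loss(model, images, targets, criterion):
    outputs = model(images)
    if isinstance(outputs, (tuple, list)):       # gt_ds=True: (side outputs, final mask)
        gt_pre, out = outputs
        return criterion(gt_pre, out, targets)
    return criterion(outputs, targets)           # gt_ds=False: final mask only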

Errors during model conversion and optimization

Exporting the model directly with torch.onnx.export produces no error. However, when the resulting ONNX model is optimized with onnxsim, it reports an error on gmhpa/Resize_2_output_0, pointing at the F.interpolate(params_zy) call inside Grouped_multi_axis_Hadamard_Product_Attention and complaining that the output is undefined. Converting the original ONNX model to MNN also succeeds, but running the MNN model reports the same error as onnxsim: the result of the resize on params_zy is undefined. What could be the reason?
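
One common workaround for Resize-related failures in onnxsim (an assumption, not verified against this model) is to export with a fixed input shape and a recent opset, and to make sure F.interpolate is given an explicit integer size rather than one computed from a tensor at runtime. An export sketch:

# Hedged sketch: export with a fixed input shape and a recent opset, which often
# avoids undefined Resize outputs in downstream tools. Resolution and opset are assumptions.
import torch

def export_onnx(model, path="ege_unet.onnx", size=(1, 3, 256, 256), opset=12):
    model.eval()
    dummy = torch.randn(*size)                   # fixed shape, no dynamic axes
    torch.onnx.export(
        model, dummy, path,
        opset_version=opset,
        input_names=["input"], output_names=["output"],
        do_constant_folding=True,
    )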

Results do not match

Hello, my test result for U-Net differs from the original paper: I get 86.73.
Thank you!

About Dataset

I would like to ask how the dataset was processed.
Is it correct that the official training and validation sets were merged and then split 7:3 into an experimental training set and test set?

Input size

Thanks for your contribution. I just have a question about the input size: I want to train with a larger input size (512 x 512). Will it impact performance a lot, or does the model still adapt to it?

Training time

Why does one epoch take one to two hours for me? The batch_size is 512, the training set has about 10,000 images and the test set about 2,000. Also, after changing gpu_id I still cannot train on multiple GPUs. Is there a parameter I have set incorrectly?
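
As a generic note (not specific to this repository's config handling), multi-GPU data-parallel training in PyTorch requires both exposing the devices and wrapping the model; changing a gpu_id value alone is not enough. A minimal sketch:

# Hedged sketch: generic PyTorch data-parallel setup. How gpu_id is consumed by this
# repo's train.py is not shown on this page.
import os
import torch

def to_multi_gpu(model, gpu_ids="0,1"):
    os.environ["CUDA_VISIBLE_DEVICES"] = gpu_ids  # must be set before CUDA is initialised
    model = model.cuda()
    if torch.cuda.device_count() > 1:
        model = torch.nn.DataParallel(model)      # splits each batch across visible GPUs
    return model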

Training on other datasets and deployment

Thanks for your good research. I trained the network on a sky dataset.
Compared with u2netp, the model size is reduced by a factor of 10 and the inference speed is more than 14x higher, with only a slight decrease in accuracy.


Large deviation in the comparison-experiment results

I'd like to ask the authors: were all the comparison experiments run with your own reproductions or with the official open-source code? The paper reports that U-Net reaches a DSC of 86.99 on ISIC2017, but with UNet-pytorch I can only reach 83.976. Isn't that gap rather large?
Also, my reproduced results for EGE-UNet on ISIC2017 are slightly lower than reported. Could this be a problem with my GPU?
