
Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images

Authors: Deng-Ping Fan, Tao Zhou, Ge-Peng Ji, Yi Zhou, Geng Chen, Huazhu Fu, Jianbing Shen, and Ling Shao.

0. Preface

  • This repository provides code for "Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images" (TMI-2020). (arXiv pre-print & medRxiv & Chinese translation)

  • If you have any questions about our paper, feel free to contact us. If you use the COVID-SemiSeg dataset, Inf-Net, or our evaluation toolbox in your research, please cite this paper (BibTeX).

  • We also maintain an awesome-list of carefully collected COVID-19 imaging-based AI research papers and datasets.

0.1. 🔥 NEWS 🔥

  • [2022/04/08] 💥 We released a new large-scale dataset for the Video Polyp Segmentation (VPS) task; please enjoy it. Project Link / PDF.
  • [2021/04/15] Updated the results on the multi-class segmentation task, including 'Semi-Inf-Net & FCN8s' and 'Semi-Inf-Net & MC'. (Download link: Google Drive)
  • [2020/10/25] Uploaded the Chinese translation (中文翻译版).
  • [2020/10/14] Updated the legend (1×1 -> 3×3; 3×3 -> 1×1) of Fig. 3 in our manuscript.
  • [2020/08/15] Updated equation (2) in our manuscript: R_i = C(f_i, Dow(e_{att})) * A_i -> R_i = C(f_i * A_i, Dow(e_{att})).
  • [2020/08/15] Optimized the testing code; you can now test custom data without gt_path.
  • [2020/05/15] Our paper was accepted for publication in IEEE TMI.
  • [2020/05/13] 💥 Uploaded pre-trained weights. (Uploaded by Ge-Peng Ji)
  • [2020/05/12] 💥 Released training/testing/evaluation code. (Updated by Ge-Peng Ji)
  • [2020/05/01] Created the repository.

0.2. Table of Contents

Table of contents generated with markdown-toc

1. Introduction

1.1. Task Descriptions


Figure 1. Example of COVID-19 infected regions in a CT axial slice, where the red and green masks denote ground-glass opacity (GGO) and consolidation, respectively. The images are collected from [1].

[1] COVID-19 CT segmentation dataset, link: https://medicalsegmentation.com/covid19/, accessed: 2020-04-11.

2. Proposed Methods

  • Preview:

    Our proposed methods consist of three individual components under three different settings:

    • Inf-Net (supervised learning for infection segmentation).

    • Semi-Inf-Net (semi-supervised learning with doctor labels and pseudo labels).

    • Semi-Inf-Net + Multi-Class UNet (extended to multi-class segmentation, including background, ground-glass opacities, and consolidation).

  • Dataset Preparation:

    Firstly, download the testing/training set (Google Drive Link) and put it into the ./Dataset/ directory.

  • Download the Pretrained Model:

    Download the ImageNet pre-trained models used in our paper (VGGNet16, ResNet, and Res2Net) and put them into the ./Snapshots/pre_trained/ directory.

  • Configuring your environment (Prerequisites):

    Note that the Inf-Net series has only been tested on Ubuntu 16.04 with CUDA 10.0. It may work on other operating systems as well, but we do not guarantee it.

    • Creating a virtual environment in terminal: conda create -n SINet python=3.6.

    • Installing necessary packages: pip install -r requirements.txt.

    • Installing THOP to count the FLOPs/params of the model.
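
    A minimal profiling sketch with THOP; this is an illustration, not the repository's exact script, and the stand-in model and the 352×352 input size are assumptions:

```python
import torch
import torch.nn as nn
from thop import profile  # pip install thop

# Stand-in model; replace with the Inf-Net variant you instantiate from this repo.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 1))
dummy = torch.randn(1, 3, 352, 352)  # assumed input resolution
flops, params = profile(model, inputs=(dummy,))
print(f'FLOPs: {flops / 1e9:.2f} G, Params: {params / 1e6:.2f} M')
```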

2.1. Inf-Net

2.1.1. Overview


Figure 2. The architecture of our proposed Inf-Net model, which consists of three reverse attention (RA) modules connected to the paralleled partial decoder (PPD).
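
For intuition, here is a minimal PyTorch sketch of one reverse attention step, following the corrected equation (2) from the NEWS above, R_i = C(f_i * A_i, Dow(e_{att})); the layer names and channel sizes are illustrative assumptions, not the repository's exact code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttentionSketch(nn.Module):
    """Illustrative RA step: R_i = C(f_i * A_i, Dow(e_att)), where A_i is a
    reverse attention map derived from the coarse prediction, C(.,.) denotes
    concatenation followed by convolution, and Dow(.) is downsampling."""
    def __init__(self, feat_ch, edge_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(feat_ch + edge_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, f_i, coarse_map, e_att):
        # Reverse attention: A_i = 1 - sigmoid(coarse map at this resolution)
        a_i = 1 - torch.sigmoid(F.interpolate(
            coarse_map, size=f_i.shape[2:], mode='bilinear', align_corners=False))
        # Dow(e_att): resize the edge-attention features to this resolution
        e_dow = F.interpolate(
            e_att, size=f_i.shape[2:], mode='bilinear', align_corners=False)
        # C(f_i * A_i, Dow(e_att)): multiply first, then concatenate and convolve
        return self.conv(torch.cat([f_i * a_i, e_dow], dim=1))
```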

2.1.2. Usage

  1. Train

    • We provide multiple backbone versions (see this line) in the training phase, i.e., ResNet, Res2Net, and VGGNet, but we only provide the Res2Net version for Semi-Inf-Net. You can also try any other backbone you prefer, but the pseudo labels must be RE-GENERATED with the corresponding backbone.

    • Turn off the semi-supervised mode (--is_semi=False) and turn off the pseudo-label flag (--is_pseudo=False) in the parser of MyTrain_LungInf.py, then just run it! (see this line)

  2. Test

    • When training is completed, the weights will be saved in ./Snapshots/save_weights/Inf-Net/. You can also directly download the pre-trained weights from Google Drive.

    • Assign the trained-weights path --pth_path and the results path --save_path in MyTest_LungInf.py.

    • Just run it and the results will be saved in ./Results/Lung infection segmentation/Inf-Net.
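
    For orientation, a hedged sketch of what the test step does; the Inf_Net placeholder, checkpoint name, and output indexing are illustrative assumptions, and MyTest_LungInf.py is authoritative:

```python
import torch

# `Inf_Net` is a hypothetical placeholder for the model class used in training.
model = Inf_Net()
state = torch.load('./Snapshots/save_weights/Inf-Net/Inf-Net-100.pth',  # assumed name
                   map_location='cpu')
model.load_state_dict(state)
model.eval()
with torch.no_grad():
    images = torch.randn(1, 3, 352, 352)   # replace with a real preprocessed slice
    outputs = model(images)
    pred = torch.sigmoid(outputs[-1])      # assumed: the last output is the final map
```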

2.2. Semi-Inf-Net

2.2.1. Overview


Figure 3. Overview of the proposed Semi-supervised Inf-Net framework.

2.2.2. Usage

  1. Data Preparation for pseudo-label generation (Optional)

    • Divide the 1600 unlabeled images into 1600/K = 320 groups (we set K=5 in our implementation); the images in *.jpg format can be found in ./Dataset/TrainingSet/LungInfection-Train/Pseudo-label/Imgs/ (assuming you have downloaded all the train/test images following the instructions above). Then just run ./SrcCode/utils/split_1600.py to split them into multiple sub-datasets, which are used in the training process of pseudo-label generation. The sub-datasets will be saved in ./Dataset/TrainingSet/LungInfection-Train/Pseudo-label/DataPrepare/Imgs_split/ (a sketch of this split follows the next bullet).

    • You can also skip this step and download the split data used in our implementation from Google Drive.
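
    A sketch of the kind of split that ./SrcCode/utils/split_1600.py performs, under the assumption that it simply partitions the 1600 JPEGs into 1600/K folders of K images each (the paths are copied from above; the group-folder naming is an assumption):

```python
import os
import shutil
from glob import glob

SRC = './Dataset/TrainingSet/LungInfection-Train/Pseudo-label/Imgs'
DST = './Dataset/TrainingSet/LungInfection-Train/Pseudo-label/DataPrepare/Imgs_split'
K = 5  # images per group -> 1600 / K = 320 groups

images = sorted(glob(os.path.join(SRC, '*.jpg')))
for g, start in enumerate(range(0, len(images), K)):
    group_dir = os.path.join(DST, f'group_{g:03d}')  # folder naming is assumed
    os.makedirs(group_dir, exist_ok=True)
    for path in images[start:start + K]:
        shutil.copy(path, group_dir)
```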

  2. Generating Pseudo Labels (Optional)

    • After preparing all the data, just run PseudoGenerator.py. It may take at least a day and a half to finish the whole generation.

    • You can also skip this step and download the intermediate generated files used in our implementation from Google Drive.

    • When generation is completed, the images with pseudo labels will be saved in ./Dataset/TrainingSet/LungInfection-Train/Pseudo-label/.

  3. Train

    • Firstly, turn off the semi-supervised mode (--is_semi=False) and turn on the pseudo-label flag (--is_pseudo=True) in the parser of MyTrain_LungInf.py, and point the training-data path at the pseudo-label directory (--train_path='Dataset/TrainingSet/LungInfection-Train/Pseudo-label'). Just run it!

    • When training is completed, the weights (trained on pseudo labels) will be saved at ./Snapshots/save_weights/Inf-Net_Pseduo/Inf-Net_pseudo_100.pth. You can also directly download these pre-trained weights from Google Drive. We now have weights pre-trained on 1600 images with pseudo labels. Please note that these valuable images/labels can promote the performance and stability of the training process, because ImageNet pre-trained models were originally designed for general object classification/detection/segmentation tasks.

    • Secondly, turn on the semi-supervised mode (--is_semi=True) and turn off the pseudo-label flag (--is_pseudo=False) in the parser of MyTrain_LungInf.py, and point the training-data path at the doctor-label (50 images) directory (--train_path='Dataset/TrainingSet/LungInfection-Train/Doctor-label'). Just run it.
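
    For convenience, the three training configurations of MyTrain_LungInf.py described above, summarized in one place (flag names and paths copied from the instructions):

```python
# Flag combinations for MyTrain_LungInf.py, per the steps above.
CONFIGS = {
    'Inf-Net (fully supervised on doctor labels)': dict(
        is_semi=False, is_pseudo=False),
    'Pre-training on 1600 pseudo-labeled images': dict(
        is_semi=False, is_pseudo=True,
        train_path='Dataset/TrainingSet/LungInfection-Train/Pseudo-label'),
    'Semi-Inf-Net (fine-tuning on 50 doctor labels)': dict(
        is_semi=True, is_pseudo=False,
        train_path='Dataset/TrainingSet/LungInfection-Train/Doctor-label'),
}
```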

  4. Test

    • When training is completed, the weights will be saved in ./Snapshots/save_weights/Semi-Inf-Net/. You can also directly download the pre-trained weights from Google Drive.

    • Assign the trained-weights path --pth_path and the results path --save_path in MyTest_LungInf.py.

    • Just run it! The results will be saved in ./Results/Lung infection segmentation/Semi-Inf-Net.

2.3. Semi-Inf-Net + Multi-class UNet

2.3.1. Overview

Here, we provide a general and simple framework to address the multi-class segmentation problem. We modify the original UNet design, which targets binary segmentation, and thus name the result Multi-class UNet. More details can be found in our paper; a minimal sketch of the idea follows the figure.


Figure 3. Overview of the proposed Semi-supervised Inf-Net framework.
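
A minimal sketch of the modification that turns a binary UNet head into a multi-class one, assuming three classes (background, GGO, consolidation) and a 64-channel final decoder feature map; this illustrates the general idea, not the repository's exact code:

```python
import torch.nn as nn

# Binary UNet head: one output channel, thresholded after a sigmoid.
binary_head = nn.Conv2d(64, 1, kernel_size=1)

# Multi-class UNet head: one channel per class (background, GGO, consolidation).
# The repo trains with BCELoss on one-hot labels (see onehot.py); a softmax +
# cross-entropy head would be the more common alternative.
multiclass_head = nn.Conv2d(64, 3, kernel_size=1)
```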

2.3.2. Usage

  1. Train

    • Just run MyTrain_MulClsLungInf_UNet.py

    • Note that ./Dataset/TrainingSet/MultiClassInfection-Train/Prior is simply borrowed from ./Dataset/TestingSet/LungInfection-Test/GT/, and thus the two directories are identical.

  2. Test

    • When training is completed, the weights will be saved in ./Snapshots/save_weights/Semi-Inf-Net_UNet/. Also, you can directly download the pre-trained weights from Google Drive.

    • Assign the weights path in the parameter snapshot_dir and run MyTest_MulClsLungInf_UNet.py. All predictions will be saved in ./Results/Multi-class lung infection segmentation/Consolidation and ./Results/Multi-class lung infection segmentation/Ground-glass opacities.
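
    For reference, a hedged sketch of how a 3-channel prediction could be split into the two per-class result folders; the channel order background/GGO/consolidation is an assumption:

```python
import numpy as np

def split_classes(pred):
    """pred: (3, H, W) array of per-class scores; channel order is assumed."""
    labels = pred.argmax(axis=0)
    ggo = (labels == 1).astype(np.uint8) * 255            # -> .../Ground-glass opacities
    consolidation = (labels == 2).astype(np.uint8) * 255  # -> .../Consolidation
    return ggo, consolidation
```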

3. Evaluation Toolbox

3.1. Introduction

We provide a one-key evaluation toolbox for lung infection segmentation tasks, including Lung-Infection and Multi-Class-Infection. Please download the evaluation toolbox from Google Drive.

3.2. Usage

  • Prerequisites: MATLAB (both Windows and Linux work, but we suggest Linux for convenience).

  • Run cd ./Evaluation/ and then matlab in the terminal to open MATLAB.

  • Just run main.m to get the overall evaluation results.

  • Edit the parameters in main.m to evaluate your custom methods; please refer to the instructions in main.m.

4. COVID-SemiSeg Dataset

We also build a semi-supervised COVID-19 infection segmentation (COVID-SemiSeg) dataset, with 100 labelled CT scans from the COVID-19 CT Segmentation dataset [1] and 1600 unlabeled images from the COVID-19 CT Collection dataset [2]. Our COVID-SemiSeg Dataset can be downloaded at Google Drive.

[1] "COVID-19 CT segmentation dataset," https://medicalsegmentation.com/covid19/, accessed: 2020-04-11.
[2] J. P. Cohen, P. Morrison, and L. Dao, "COVID-19 image data collection," arXiv, 2020.

4.1. Training set

  1. Lung infection, which consists of 50 labels by doctors (Doctor-label) and 1600 pseudo labels (Pseudo-label) generated by our Semi-Inf-Net model. Download Link.

  2. Multi-class lung infection, which is also composed of 50 multi-class labels (GT) by doctors and 50 lung infection labels (Prior) generated by our Semi-Inf-Net model. Download Link.

4.2. Testing set

  1. The Lung infection segmentation set contains 48 images associated with 48 GT. Download Link.

  2. The Multi-Class lung infection segmentation set has 48 images and 48 GT. Download Link.

  3. The download link (Google Drive) of our 638-image dataset, which is used in Table V of our paper.

Note: in our manuscript we stated that there are 50 testing images in total. However, we found two images with very small resolution and all-black ground truth, so we discarded them from the testing set. The link above therefore contains only 48 testing images.

5. Results

To compare infection region segmentation performance, we consider two state-of-the-art models, U-Net and U-Net++. We also show the multi-class infection labeling results in Fig. 5. As can be observed, our model, Semi-Inf-Net & FCN8s, consistently performs best among all methods. It is worth noting that both GGO and consolidation infections are accurately segmented by Semi-Inf-Net & FCN8s, which further demonstrates the advantage of our model. In contrast, the baseline methods, DeepLabV3+ with different strides and FCNs, all obtain unsatisfactory results, and neither GGO nor consolidation infections can be accurately segmented by them.

5.1. Download link:

Lung infection segmentation results can be downloaded from this link.

Multi-class lung infection segmentation results can be downloaded from this link.

6. Visualization Results:


Figure 4. Visual comparison of lung infection segmentation results.


Figure 5. Visual comparison of multi-class lung infection segmentation results, where the red and green labels indicate the GGO and consolidation, respectively.

7. Paper list of COVID-19 related research (continuously updated)

Original GitHub link: https://github.com/HzFu/COVID19_imaging_AI_paper_list


Figure 6. This is a collection of COVID-19 imaging-based AI research papers and datasets.

8. Manuscript

https://arxiv.org/pdf/2004.14133.pdf

9. Citation

Please cite our paper if you find the work useful:

@article{fan2020infnet,
  author={Fan, Deng-Ping and Zhou, Tao and Ji, Ge-Peng and Zhou, Yi and Chen, Geng and Fu, Huazhu and Shen, Jianbing and Shao, Ling},
  journal={IEEE Transactions on Medical Imaging}, 
  title={Inf-Net: Automatic COVID-19 Lung Infection Segmentation From CT Images}, 
  year={2020},
  volume={39},
  number={8},
  pages={2626-2637},
  doi={10.1109/TMI.2020.2996645}
}

10. LICENSE

  • The COVID-SemiSeg Dataset is made available for non-commercial purposes only. Any commercial use requires formal permission in advance.

  • You will not, directly or indirectly, reproduce, use, or convey the COVID-SemiSeg Dataset or any Content, or any work product or data derived therefrom, for commercial purposes.

11. Acknowledgements

We would like to thank the organizing committee for considering our paper for publication in this special issue (Special Issue on Imaging-Based Diagnosis of COVID-19) of IEEE Transactions on Medical Imaging. For more papers, refer to Link.

12. TODO LIST

If you want to improve the usability of the code or have any other advice, please feel free to contact me directly (E-mail).

  • Support NVIDIA APEX training.

  • Support different backbones (VGGNet (done), ResNet, ResNeXt, Res2Net (done), iResNet, ResNeSt, etc.).

  • Support distributed training.

  • Support lightweight architectures and faster inference, like MobileNet and SqueezeNet.

  • Add more comprehensive competitors.

13. FAQ

  1. If the images cannot be loaded on the page (mostly in domestic network situations):

    Solution Link

  2. I tested U-Net, but the Dice score differs from the score in TABLE II (page 8 of our manuscript).
    Note that our Dice score is the mean Dice score rather than the max Dice score. You can use our evaluation toolbox (Google Drive). Also note that the training set of each compared model (e.g., U-Net, Attention-UNet, Gated-UNet, Dense-UNet, U-Net++, Inf-Net (ours)) is 48 images rather than 48 images + 1600 images.
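
In other words, the reported score averages the per-image Dice over the test pairs rather than taking the best one; a minimal sketch:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice coefficient for binary masks (numpy arrays of 0/1)."""
    inter = (pred * gt).sum()
    return (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

# Mean Dice over the test set (what the paper reports), not the max:
scores = [dice(p, g) for p, g in zip(all_preds, all_gts)]  # 48 (pred, gt) mask pairs
mean_dice = float(np.mean(scores))
```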



inf-net's Issues

download data

Thank you for your sharing.
I can't download the data you uploaded to Google Drive because I can't connect to Google.
Could you provide another download link, such as BaiduYunPan, for the datasets and pre-trained weights?
Thanks again.

Semi-Inf-Net + Multi-class UNet question

Hello author, I used the third setting (Semi-Inf-Net + Multi-class UNet). After training the model, I tested it on the 48-image testing set, but why are the results in the class_12 folder colored (yellow for background, green for consolidation, dark red for GGO), while the images in the Consolidation and Ground-glass opacities folders are pure black?
Finally, I don't quite understand why the testing code also has three data paths. How can I run prediction on my own data? I only have the original lung-window images and the segmented lung parenchyma, without infection annotations.
Looking forward to your reply, thanks.

Pre-trained data

Dear Deng Ping Fan,

I'm really interested in the model, however I cannot find the pre-trained weights..

Could you kindly check the links in the instructions?

Best

Giulio

Why the relu is not used in the BasicConv2d?

File: InfNet_VGGNet.py

class BasicConv2d(nn.Module):
    def __init__(self, in_planes, out_planes, kernel_size, stride=1, padding=0, dilation=1):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_planes, out_planes,
                              kernel_size=kernel_size, stride=stride,
                              padding=padding, dilation=dilation, bias=False)
        self.bn = nn.BatchNorm2d(out_planes)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        return x

Why the ReLU is not used?

generating pseudo-label images

When generating pseudo-label images, a problem arises.

PseudoGenerator.py line 149, in inference_module
image, gt, name = test_loader.load_data()
ValueError: not enough values to unpack (expected 3, got 2)

I noticed that line 143 of the code says FIXME, so could the author tell me the specific implementation details?

python evaluate toolbox

Hi, very excellent work. Thanks for the code, it's very helpful and instructive! However, I have a small request: could you provide a Python evaluation toolbox? I don't have MATLAB installed.
Best!

problems with configuring environment

When I execute pip install -r requirements.txt, I get this error:
ERROR: Could not find a version that satisfies the requirement channelnorm-cuda==0.0.0 (from -r requirements.txt (line 3)) (from versions: none)
ERROR: No matching distribution found for channelnorm-cuda==0.0.0 (from -r requirements.txt (line 3))

How can I install this package? Thank you for your reply.

problem of LungInfection-Test datasets

I downloaded the LungInfection datasets from the link in the README. LungInfection-Test contains 48 images, but your paper mentions 50. The same discrepancy occurs in MultiClassInfection-Test.

Ask about the reverse attention module

The author writes in the article that, in every reverse attention module, A (high-level feature maps) is first concatenated with B (edge attention) and then multiplied by C (the expanded reverse attention map), but the code shows that A is first multiplied by C and then concatenated with B. Which one did the author intend?

About the generation of picture edges

Hello author, in the dataset files you provide there is an edge folder that is not available on the official website. I would like to ask how these images with boundary information were generated. Thank you.

Semi-Inf-Net

Hello author, when testing the first network (Inf-Net) and training the second network (Semi-Inf-Net) on the label set, I run into the following error:
[Errno 2] No such file or directory: '/media/nercms/NERCMS/GepengJi/COVID-19/Code/PraNetPlusPlus/snapshots/res2net50_v1b_26w_4s-3cf99910.pth'
I only know that this path comes from the Res2Net.py file, but I don't know how to fix it. Looking forward to your reply, thank you very much!

About the dataloader

The line

if img.size == gt.size

filters out pseudo labels: no pseudo labels can be read when generating pseudo labels or training with them. Do I need to remove this condition?

About the Multi-class U-Net

Hi!
You have done splendid work, and I have a question about the loss and labels of the Multi-class U-Net. Why don't you just use CrossEntropyLoss instead of BCELoss with one-hot labels?
It would be very kind of you to answer my question.
Thanks.

About image edge

Hi, very excellent work. Thanks for the code, it's very helpful and instructive! I want to add a new dataset. Can you tell me how you obtained the image edges?

About preprocessing

Hello, thank you for your code. I would like to know what preprocessing you applied to the data. When running on my own data, the results are not ideal; is this because of a difference in preprocessing, or because the data are too different?

about the code of visualization

Hello, author. Thank you for providing such good work. I have a question about the visualization of features using Grad-CAM in your paper, which I am very interested in; could you make this part of the code public? My e-mail is [email protected]. Thank you very much!

About the onehot.py

onehot.py

def onehot(data, n):
    """onehot encoder"""
    buf = np.zeros(data.shape + (n,))
    nmsk = np.arange(data.size) * n + data.ravel()
    buf.ravel()[nmsk - 1] = 1
    return buf

When I run

label = np.asarray([[0,1,1], [0,0,2]])
print(label)
print("-----------")
label_onehot = onehot(label,3)
print(label_onehot.transpose(2, 0, 1))

and get the following results:

[[0 1 1]
 [0 0 2]]
-----------
[[[0 1 1]
  [0 0 0]]

 [[0 0 0]
  [0 0 1]]

 [[0 0 1]
  [1 0 1]]]

This is not the correct result. The correct result should be as follows:

[[0 1 1]
 [0 0 2]]
-----------
[[[1 0 0]
  [1 1 0]]

 [[0 1 1]
  [0 0 0]]

 [[0 0 0]
  [0 0 1]]]

Is there an error in the code itself, or am I doing something wrong?
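
For what it's worth, the off-by-one looks like the culprit: for element i with class c, the flat index into buf is i*n + c, so subtracting 1 shifts every assignment. A sketch of the corrected encoder (my reading, not an official fix):

```python
import numpy as np

def onehot(data, n):
    """One-hot encoder: buf[..., c] = 1 wherever data == c."""
    buf = np.zeros(data.shape + (n,))
    nmsk = np.arange(data.size) * n + data.ravel()
    buf.ravel()[nmsk] = 1  # drop the `- 1`: i*n + c is already the right index
    return buf
```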

Intensity normalization question

Hello,

I have noticed in your code you use ImageNet mean-std normalization, which is for images in the 0-1 range.

Therefore, do you use min-max normalization to put Hounsfield Unit input images (original CT values) into the 0-1 range? Did you do that per volume? I haven't found details about that in the paper.

I want to make sure I am reproducing your work correctly, since the network performance can be affected greatly by wrong normalization.

Thank you
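
A common recipe for this situation, offered as an assumption about reasonable preprocessing rather than a confirmed description of the authors' pipeline: clip the HU values to a lung window, min-max scale to [0, 1], then apply the ImageNet statistics:

```python
import numpy as np

def preprocess_ct(slice_hu, hu_min=-1250.0, hu_max=250.0):
    """Clip to an assumed lung window, scale to [0, 1], then ImageNet-normalize."""
    x = np.clip(slice_hu, hu_min, hu_max)
    x = (x - hu_min) / (hu_max - hu_min)   # min-max to [0, 1]
    x = np.stack([x, x, x], axis=0)        # replicate the slice to 3 channels
    mean = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
    std = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)
    return (x - mean) / std
```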

A Question about Fig.3 in the paper

Hello, in the source code's implementation of the Parallel Partial Decoder, only the last convolution kernel is 1×1 while all the others are 3×3. As I understand it, this seems to be the opposite of how the two kernel sizes are labeled in Fig. 3 of the paper. If my understanding is wrong, please correct me.

Error while executing MyTest_LungInf.py

Hi,
Thank you for this helpful compilation.
I am able to generate my weights after training MyTrain_LungInf.py with backbone = 'Res2Net50'.
However, when I try to run MyTest_LungInf.py I face the issue below, although I have placed the pre-trained weights of 'Inf-Net_Pseduo' in the required path. Is it because of some mismatch between the trained 'Semi-Inf-Net-100.pth' and the pre-trained 'Inf-Net_Pseduo'? How could we fix this?

Traceback (most recent call last):
File "MyTest_LungInf.py", line 65, in
inference()
File "MyTest_LungInf.py", line 39, in inference
model.load_state_dict(torch.load(opt.pth_path))
File "/home/user/Virtualenv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 839, in load_state_dict
self.class.name, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Inf_Net:
Unexpected key(s) in state_dict: "total_ops", "total_params", "resnet.total_ops", "resnet.total_params", "resnet.conv1.total_ops", "resnet.conv1.total_params", "resnet.layer1.total_ops", "resnet.layer1.total_params", "resnet.layer1.0.total_ops", "resnet.layer1.0.total_params", "resnet.layer1.0.convs.total_ops", "resnet.layer1.0.convs.total_params", "resnet.layer1.0.bns.total_ops", "resnet.layer1.0.bns.total_params", "resnet.layer1.0.downsample.total_ops", "resnet.layer1.0.downsample.total_params", "resnet.layer1.1.total_ops", "resnet.layer1.1.total_params", "resnet.layer1.1.convs.total_ops", "resnet.layer1.1.convs.total_params", "resnet.layer1.1.bns.total_ops", "resnet.layer1.1.bns.total_params", "resnet.layer1.2.total_ops", "resnet.layer1.2.total_params", "resnet.layer1.2.convs.total_ops", "resnet.layer1.2.convs.total_params", "resnet.layer1.2.bns.total_ops", "resnet.layer1.2.bns.total_params", "resnet.layer2.total_ops", "resnet.layer2.total_params", "resnet.layer2.0.total_ops", "resnet.layer2.0.total_params", "resnet.layer2.0.convs.total_ops", "resnet.layer2.0.convs.total_params", "resnet.layer2.0.bns.total_ops", "resnet.layer2.0.bns.total_params", "resnet.layer2.0.downsample.total_ops", "resnet.layer2.0.downsample.total_params", "resnet.layer2.1.total_ops", "resnet.layer2.1.total_params", "resnet.layer2.1.convs.total_ops", "resnet.layer2.1.convs.total_params", "resnet.layer2.1.bns.total_ops", "resnet.layer2.1.bns.total_params", "resnet.layer2.2.total_ops", "resnet.layer2.2.total_params", "resnet.layer2.2.convs.total_ops", "resnet.layer2.2.convs.total_params", "resnet.layer2.2.bns.total_ops", "resnet.layer2.2.bns.total_params", "resnet.layer2.3.total_ops", "resnet.layer2.3.total_params", "resnet.layer2.3.convs.total_ops", "resnet.layer2.3.convs.total_params", "resnet.layer2.3.bns.total_ops", "resnet.layer2.3.bns.total_params", "resnet.layer3.total_ops", "resnet.layer3.total_params", "resnet.layer3.0.total_ops", "resnet.layer3.0.total_params", "resnet.layer3.0.convs.total_ops", "resnet.layer3.0.convs.total_params", "resnet.layer3.0.bns.total_ops", "resnet.layer3.0.bns.total_params", "resnet.layer3.0.downsample.total_ops", "resnet.layer3.0.downsample.total_params", "resnet.layer3.1.total_ops", "resnet.layer3.1.total_params", "resnet.layer3.1.convs.total_ops", "resnet.layer3.1.convs.total_params", "resnet.layer3.1.bns.total_ops", "resnet.layer3.1.bns.total_params", "resnet.layer3.2.total_ops", "resnet.layer3.2.total_params", "resnet.layer3.2.convs.total_ops", "resnet.layer3.2.convs.total_params", "resnet.layer3.2.bns.total_ops", "resnet.layer3.2.bns.total_params", "resnet.layer3.3.total_ops", "resnet.layer3.3.total_params", "resnet.layer3.3.convs.total_ops", "resnet.layer3.3.convs.total_params", "resnet.layer3.3.bns.total_ops", "resnet.layer3.3.bns.total_params", "resnet.layer3.4.total_ops", "resnet.layer3.4.total_params", "resnet.layer3.4.convs.total_ops", "resnet.layer3.4.convs.total_params", "resnet.layer3.4.bns.total_ops", "resnet.layer3.4.bns.total_params", "resnet.layer3.5.total_ops", "resnet.layer3.5.total_params", "resnet.layer3.5.convs.total_ops", "resnet.layer3.5.convs.total_params", "resnet.layer3.5.bns.total_ops", "resnet.layer3.5.bns.total_params", "resnet.layer4.total_ops", "resnet.layer4.total_params", "resnet.layer4.0.total_ops", "resnet.layer4.0.total_params", "resnet.layer4.0.convs.total_ops", "resnet.layer4.0.convs.total_params", "resnet.layer4.0.bns.total_ops", "resnet.layer4.0.bns.total_params", "resnet.layer4.0.downsample.total_ops", 
"resnet.layer4.0.downsample.total_params", "resnet.layer4.1.total_ops", "resnet.layer4.1.total_params"

PraNetPlusPlus

Hi. Thanks for your sharing.
When I run "InfNet_VGGNet.py", I get this error:

"NameError: name 'PraNetPlusPlus' is not defined."

When I search the script, there is just one line where 'PraNetPlusPlus' appears:

ras = PraNetPlusPlus().cuda()

Could you help me figure out how to fix this?
