Comments (10)

yswang1717 commented on June 6, 2024

Thanks. Could you give that information separately for MagNet and MagNet-Fast?
In addition, I would also like to know how much GPU memory a single batch requires.

hmchuong commented on June 6, 2024

To train the backbone (adopted from previous works), you should run on large GPUs. I used a V100 32GB.

To train the refinement module, I used a 2080Ti 11GB. However, you can reduce the batch size to run on GPUs with 8GB; I don't think reducing the batch size makes much difference in the refinement module's performance.

yswang1717 commented on June 6, 2024

Thanks for your quick response.
I'm running and testing your code and have some questions.

- How can I reproduce your paper results on the DeepGlobe dataset?
In the paper, Table 8 reports MagNet-Fast: 71.85 and MagNet: 72.96.
However, in my environment the results do not match.

Running MagNet-Fast: sh scripts/deepglobe/test_magnet_fast.sh
output: coarse IoU: 67.23, refinement IoU: 68.22

Running MagNet: sh scripts/deepglobe/test_magnet.sh
output: coarse IoU: 67.23, refinement IoU: 72.10

The refinement IoUs reproduced with your code are lower than the paper results
(MagNet-Fast: 71.85 -> 68.22, MagNet: 72.96 -> 72.10).

I used the same .pth files downloaded from your GitHub repository, following the README instructions:
--pretrained checkpoints/deepglobe_fpn.pth \
--pretrained_refinement checkpoints/deepglobe_refinement.pth

hmchuong commented on June 6, 2024

Hi,
The results in Table 8 were obtained with the multi-scale/flipping test setting adopted from the GLNet paper for a fair comparison. Unfortunately, that testing script is not included in the current code.

yswang1717 commented on June 6, 2024

Hello!

  1. If possible, could you please email me the test code? ([email protected])
    I really want to reproduce your paper results.

  2. Or could you just list the scales and related parameters for the multi-scale testing?

  3. In addition, I cannot find the multi-scale option in the GLNet GitHub code below. Where is it?
    https://github.com/VITA-Group/GLNet/blob/7b7bdee196e368a1f3a32c54b984915f8e397275/helper.py#L390

hmchuong commented on June 6, 2024

Hi,

Sorry, it's not multi-scale; it's flipping/rotating testing. You can check the code here: https://github.com/VITA-Group/GLNet/blob/7b7bdee196e368a1f3a32c54b984915f8e397275/helper.py#L434
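
For reference, a minimal sketch of what flipping/rotating test-time augmentation typically looks like in PyTorch. The function and variable names here are illustrative only, not the exact GLNet or MagNet evaluation code, and the precise set of transforms used for Table 8 may differ:

import torch

def flip_rotate_tta(model, image):
    """Average softmax predictions over the identity, horizontal/vertical flips,
    and 90-degree rotations, undoing each transform before averaging.
    `model` maps a (B, C, H, W) tensor to (B, num_classes, H, W) logits."""
    preds = []
    with torch.no_grad():
        preds.append(model(image).softmax(1))                                    # identity
        preds.append(torch.flip(model(torch.flip(image, [3])).softmax(1), [3]))  # horizontal flip
        preds.append(torch.flip(model(torch.flip(image, [2])).softmax(1), [2]))  # vertical flip
        for k in (1, 2, 3):                                                      # 90/180/270-degree rotations
            rotated = torch.rot90(image, k, dims=[2, 3])
            preds.append(torch.rot90(model(rotated).softmax(1), -k, dims=[2, 3]))
    return torch.stack(preds).mean(0)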

yswang1717 commented on June 6, 2024

To reproduce Table 8, "Segmentation results on the DeepGlobe dataset", which input size did you use for "patch processing" and "down sampling" (U-Net, FCN, SegNet, ...)? The following two sentences in the paper seem to contradict each other.

  1. "We also used the same input size 508×508 as GLNet."
  2. MagNet, and 64 patches of the patch processing approach.

Downsampling is not an issue, but the patch-processing experimental setting is unclear.
2448x2448 is not evenly divisible into 508x508 patches, so did you produce 64 patches of 306x306 from the 2448x2448 images and upscale them to 508x508?

hmchuong commented on June 6, 2024

Hi, there is overlap between the patches. You can use the code I provide to get the 64 patches.
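
For reference, a minimal sketch of the tiling arithmetic, assuming an 8x8 grid of overlapping 508x508 patches over a 2448x2448 DeepGlobe image. The helper name and rounding are illustrative; the exact coordinates come from the code in the repository:

def patch_coords(image_size=2448, patch_size=508, grid=8):
    """Top-left (row, col) coordinates of a grid x grid set of overlapping patches
    that exactly cover the image. The stride is (image_size - patch_size) / (grid - 1),
    so for 2448 / 508 / 8 the step is about 277 px and neighbouring patches overlap by about 231 px."""
    step = (image_size - patch_size) / (grid - 1)
    return [(round(r * step), round(c * step)) for r in range(grid) for c in range(grid)]

coords = patch_coords()
print(len(coords))            # 64
print(coords[0], coords[-1])  # (0, 0) (1940, 1940) -- 1940 + 508 = 2448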

yswang1717 commented on June 6, 2024

Hello,

  1. I think the backbone network is not trained because of the torch.no_grad() context in the following code. Can I train the backbone network by removing the with torch.no_grad() block for both coarse_pred and fine_pred? Do I also have to remove torch.no_grad() in the feature aggregation?

with torch.no_grad():
    coarse_pred = model(coarse_image).softmax(1)
    fine_pred = model(fine_image).softmax(1)

  2. Do I have to freeze the refinement module ("Get early predictions" in the code) while training the backbone network?

  3. Running the code with torch.no_grad() removed requires 13GB and 19GB (coarse and fine respectively, same training protocol, HRNet18+OCR backbone on the DeepGlobe dataset) for a single batch. Is it correct that 32GB of GPU memory is required to train the backbone network? (DeepGlobe dataset, 508^2 input size, same scale, HRNet18+OCR backbone)
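
For reference, a minimal sketch of one way to measure per-batch peak GPU memory with PyTorch's built-in counters. The model, images, labels, and loss below are generic placeholders, not the MagNet training code:

import torch
import torch.nn.functional as F

def peak_memory_gb(model, images, labels, device="cuda"):
    """Run one forward/backward pass and report the peak allocated GPU memory in GB."""
    torch.cuda.reset_peak_memory_stats(device)
    model = model.to(device)
    images, labels = images.to(device), labels.to(device)
    loss = F.cross_entropy(model(images), labels)  # labels: (B, H, W) class indices
    loss.backward()
    return torch.cuda.max_memory_allocated(device) / 1024 ** 3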

hmchuong commented on June 6, 2024

Hi,

There are two separate code sections: one for backbone training and one for refinement-module training. Please check those; currently the two modules cannot be jointly trained.
