Comments (10)
Thanks. Could you share the information separately for MagNet and MagNet-Fast?
In addition, I would also like to know the GPU memory required for a single batch.
To train the backbone, you should run on large GPUs (following previous works). I used a V100 32GB.
To train the refinement module, I used a 2080Ti 11GB. However, you can reduce the batch size to run on GPUs with 8GB; I don't think reducing the batch size makes much difference to the refinement performance.
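If you want to check the memory needed for one batch on your own setup, here is a minimal sketch (plain PyTorch with a stand-in torchvision backbone, not the repo's actual HRNet18+OCR training code) that reports the peak allocation for a single training step:

import torch
import torchvision

# Hypothetical stand-in backbone; swap in the model you actually train.
model = torchvision.models.segmentation.fcn_resnet50(
    weights=None, weights_backbone=None, num_classes=7).cuda()
x = torch.randn(1, 3, 508, 508, device="cuda")         # one 508x508 image, batch size 1
y = torch.randint(0, 7, (1, 508, 508), device="cuda")  # 7 DeepGlobe classes

torch.cuda.reset_peak_memory_stats()
loss = torch.nn.functional.cross_entropy(model(x)["out"], y)
loss.backward()  # the backward pass dominates training memory
print(f"peak memory for one batch: {torch.cuda.max_memory_allocated() / 2**30:.2f} GiB")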
Thanks for your quick response.
I'm running and testing your code and have some questions.
- How can I reproduce your paper results on the DeepGlobe dataset?
In the paper, Table 8 reports MagNet-Fast: 71.85 and MagNet: 72.96.
However, in my environment, the results do not match:
running MagNet-Fast: sh scripts/deepglobe/test_magnet_fast.sh
output: coarse IoU 67.23, refinement IoU 68.22
running MagNet: sh scripts/deepglobe/test_magnet.sh
output: coarse IoU 67.23, refinement IoU 72.10
The refinement IoUs reproduced from your code are lower than the paper results
(MagNet-Fast: 71.85 -> 68.22, MagNet: 72.96 -> 72.10).
I used the same .pth files downloaded from your GitHub repo, following the README instructions:
--pretrained checkpoints/deepglobe_fpn.pth \
--pretrained_refinement checkpoints/deepglobe_refinement.pth \
Hi,
The results in Table 8 were obtained with the multi-scale/flipping test setting adopted from the GLNet paper, for a fair comparison. Unfortunately, that testing script is not included in the current code.
Hello!
- If possible, could you please mail me the test code? ([email protected]) I really want to reproduce your paper results.
- Or could you just write the multi-scale ratios and the related parameters?
- In addition, I cannot find the multi-scale option in the GLNet GitHub code below. Where is it?
https://github.com/VITA-Group/GLNet/blob/7b7bdee196e368a1f3a32c54b984915f8e397275/helper.py#L390
Hi,
Sorry, it's not multi-scale; it's flipping/rotating testing. You can check the code here: https://github.com/VITA-Group/GLNet/blob/7b7bdee196e368a1f3a32c54b984915f8e397275/helper.py#L434
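For reference, a generic sketch of this kind of flipping/rotating test-time augmentation (a reconstruction of the idea, not the exact GLNet/MagNet script; it assumes a square input so rotated predictions align when rotated back):

import torch

def flip_rotate_predict(model, image):
    # Average softmax predictions over horizontal flips and 90-degree
    # rotations, undoing each transform on the prediction before averaging.
    preds = []
    for flip in (False, True):
        x = torch.flip(image, dims=[-1]) if flip else image
        for k in range(4):  # rotations by 0, 90, 180, 270 degrees
            p = model(torch.rot90(x, k, dims=(-2, -1))).softmax(1)
            p = torch.rot90(p, -k, dims=(-2, -1))  # undo the rotation
            if flip:
                p = torch.flip(p, dims=[-1])       # undo the flip
            preds.append(p)
    return torch.stack(preds).mean(0)

The 2 flips x 4 rotations give 8 aligned probability maps, whose average is the final prediction.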
To reproduce Table 8 ("Segmentation results on the DeepGlobe dataset"), which input size did you use for "patch processing" and "downsampling" (U-Net, FCN, SegNet, ...)? The following two sentences in the paper contradict each other:
- "We also used the same input size 508×508 as GLNet."
- "... MagNet, and 64 patches of the patch processing approach."
Downsampling does not matter, but the experimental setting for patch processing is unclear.
508×508 does not tile 2448×2448 evenly; did you produce 64 patches of 306×306 from the 2448×2448 images and upscale them to 508×508?
Hi, the patches overlap. You can use the code I provide to get the 64 patches.
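To make the overlap concrete, a small sketch of one way to lay out 64 overlapping 508×508 patches on a 2448×2448 DeepGlobe image (a reconstruction of the idea, not the repo's exact code):

import numpy as np

def patch_coords(image_size=2448, patch_size=508, n=8):
    # Top-left offsets for an n x n grid of overlapping patches: the
    # offsets are spread evenly so the patches cover the whole image.
    offsets = np.linspace(0, image_size - patch_size, n).round().astype(int)
    return [(int(y), int(x)) for y in offsets for x in offsets]

coords = patch_coords()
print(len(coords))           # 64
print(coords[0], coords[1])  # (0, 0) (0, 277): stride ~277, i.e. ~231 px overlap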
Hello,
- I think the backbone network is not trained because of the torch.no_grad() in the following code ("Get early predictions"):

# Get early predictions
with torch.no_grad():
    coarse_pred = model(coarse_image).softmax(1)
    fine_pred = model(fine_image).softmax(1)

Can I train the backbone network by removing the torch.no_grad() for both coarse_pred and fine_pred? Do I additionally have to remove torch.no_grad() where the features are aggregated?
- Do I have to freeze the refinement module while training the backbone network?
- Running the code with torch.no_grad() removed requires 13GB and 19GB (coarse and fine, same training protocol, HRNet18+OCR on the DeepGlobe dataset) for a batch size of one. Is it correct that 32GB of GPU memory is required to train the backbone network (DeepGlobe, 508×508 input, same scale, HRNet18+OCR backbone)?
Hi,
There are two separate stages: one for backbone training and one for refinement module training. Please check those; currently the two modules cannot be jointly trained.
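As an illustration of the two-stage scheme, a minimal sketch of freezing one module while training the other (the module definitions here are hypothetical stand-ins, not the repo's actual classes):

import torch
import torch.nn as nn

# Hypothetical stand-ins for the two modules; the real ones come from the repo.
backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8))
refinement = nn.Conv2d(8, 7, 1)

def freeze(module: nn.Module) -> None:
    # Stop a module from training: no gradients, fixed BatchNorm statistics.
    module.eval()
    for p in module.parameters():
        p.requires_grad_(False)

# Second stage: freeze the pre-trained backbone and optimize
# only the refinement module's parameters.
freeze(backbone)
optimizer = torch.optim.SGD(refinement.parameters(), lr=1e-3)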