Comments (17)
Did you try to fine-tune the ResNet?
from gcnet.
I have a similar problem: when training from scratch, the model converges very slowly.
Me too
Sorry for the late reply.
In the first place, we didn't have enough resources to train from scratch.
Later, we tried training the whole network from scratch on ImageNet and did not observe a similar issue. I suggest you train it for 110 or 120 epochs to see the final performance.
Note that we use the same augmentation method as SENet.
@xvjiarui
Hi! In terms of the fusion setting, did the case you mentioned use the 'add' one or the 'scale' one?
We use the 'add' one by default. The 'scale' one, on the other hand, is similar to SENet. Neither of them should have convergence issues.
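For concreteness, here is a minimal sketch of how the two fusion modes combine a transformed context term with the input feature map. The shapes and names are assumptions for illustration, not code from this repo:

```python
import torch

# x: input feature map; term: output of the context transform branch,
# broadcastable against x. Both shapes are illustrative.
x = torch.randn(2, 64, 8, 8)
term = torch.randn(2, 64, 1, 1)

# 'add' fusion (the default here): the transformed context is added
# to the features at every spatial position.
out_add = x + term

# 'scale' fusion (SENet-style): the context is squashed into a gate
# in (0, 1) and multiplied into the features channel-wise.
out_scale = x * torch.sigmoid(term)

assert out_add.shape == x.shape and out_scale.shape == x.shape
```

Both variants preserve the input shape, which is why swapping one for the other should not by itself cause convergence problems.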
@xvjiarui
Thanks for your reply. By the way, would you mind sharing the classification training code in this repo? That would be a great help.
Our internal code base is used for classification training. We may try to release a cleaned version in the future, but it is not on the schedule yet.
The block structure is the same; you could simply add it to your own code.
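As a starting point, the sketch below shows one way to embed a GC block in a torchvision-style ResNet bottleneck, applied before the residual addition, mirroring where SENet places its SE module. `BottleneckWithGC` and the `gc_block` argument are illustrative names, not code from this repo:

```python
import torch
import torch.nn as nn

class BottleneckWithGC(nn.Module):
    """Hypothetical sketch: a ResNet bottleneck with a context block applied
    to the residual branch before the addition. `gc_block` is assumed to map
    (N, C, H, W) -> (N, C, H, W) on planes * 4 channels."""

    def __init__(self, inplanes, planes, gc_block, stride=1, downsample=None):
        super().__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, 3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * 4, 1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * 4)
        self.relu = nn.ReLU(inplace=True)
        self.gc = gc_block
        self.downsample = downsample

    def forward(self, x):
        identity = x if self.downsample is None else self.downsample(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        out = self.gc(out)          # context block before the residual add
        return self.relu(out + identity)
```

With `gc_block=nn.Identity()` this reduces to a plain bottleneck, which makes it easy to A/B-test the module in your own code.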
Using the best setting of GC-ResNet-50 and training it from scratch on ImageNet, I found that it gets stuck at a high loss in the early epochs before the training loss begins to decline normally, so the final result is much lower than the original ResNet-50's. Note that one difference from the original paper is that the GC modules are embedded in each bottleneck exactly as SE does, for a fair comparison.
Does anyone have the same problem?
This may be because the authors report their ImageNet results via a fine-tuning setting, which is not very common when validating models on ImageNet benchmarks. At least all the other modules (SE, SK, BAM, CBAM, AA) follow a training-from-scratch setting.
Hi! Did you solve the problem?
@xvjiarui
I am also trying to use the GC block in classifiers such as ResNet, VGG16, etc., and I would like to be sure I am doing things right.
First: in the ResNet backbone, we just need to insert the global context block before the downsample in the Bottleneck/BasicBlock class?
Second: for the global context module, the `inplane` parameter is the depth of the feature map that feeds the GC module, and the `plane` parameter is equal to `inplane // 16`, is that right?
Regarding the 'pool' parameter, I suppose 'att' is better? And for the 'fusions' parameter, 'channel add' is also better? Why does this parameter take only a list? I am not sure I understand.
For all the yes/no questions, the answer is yes; you understand correctly.
`fusions` is a list so that multiple fusion methods can be used together.
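Putting the answers together, here is a minimal sketch of a GC block under the assumptions discussed above: `planes = inplanes // ratio`, attention pooling ('att'), and a list of fusion methods so that several branches can be active at once. Parameter names approximate those in the thread; this is not the authors' implementation:

```python
import torch
import torch.nn as nn

class ContextBlock(nn.Module):
    """Hypothetical GC block sketch: attention pooling followed by one or
    more 1x1-bottleneck fusion branches."""

    def __init__(self, inplanes, ratio=16, pool='att', fusions=('channel_add',)):
        super().__init__()
        planes = inplanes // ratio
        self.pool = pool
        if pool == 'att':
            self.conv_mask = nn.Conv2d(inplanes, 1, kernel_size=1)  # attention logits
            self.softmax = nn.Softmax(dim=2)

        def transform():
            # 1x1 bottleneck transform with LayerNorm on the (C, 1, 1) context
            return nn.Sequential(
                nn.Conv2d(inplanes, planes, kernel_size=1),
                nn.LayerNorm([planes, 1, 1]),
                nn.ReLU(inplace=True),
                nn.Conv2d(planes, inplanes, kernel_size=1),
            )

        # fusions is a list/tuple so several branches can be used together
        self.channel_add = transform() if 'channel_add' in fusions else None
        self.channel_mul = transform() if 'channel_mul' in fusions else None

    def spatial_pool(self, x):
        n, c, h, w = x.shape
        if self.pool == 'att':
            logits = self.conv_mask(x).view(n, 1, h * w, 1)
            weights = self.softmax(logits)                      # softmax over positions
            context = torch.matmul(x.view(n, 1, c, h * w), weights)
            return context.view(n, c, 1, 1)
        return x.mean(dim=(2, 3), keepdim=True)                 # 'avg' pooling

    def forward(self, x):
        context = self.spatial_pool(x)                          # (n, c, 1, 1)
        out = x
        if self.channel_mul is not None:
            out = out * torch.sigmoid(self.channel_mul(context))
        if self.channel_add is not None:
            out = out + self.channel_add(context)
        return out
```

Passing `fusions=('channel_add', 'channel_mul')` enables both branches on the same pooled context, which is what the list form allows.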
@xvjiarui
Hi, thanks a lot for your great work; I appreciate it. However, I trained the network on ImageNet, and GC achieves worse performance than the original ResNet. I then followed the paper and fine-tuned ResNet-50 for another 40 epochs using a cosine schedule, and the performance is still bad. Could you please share your fine-tuning code? I would cite your work in my research. Thanks a lot.
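The fine-tuning recipe mentioned here (40 extra epochs with a cosine schedule) can be sketched generically. The optimizer settings and learning rate below are assumptions for illustration, not values from the paper or this repo:

```python
import torch

# Stand-in model; in practice this would be a GC-ResNet-50 initialized
# from pretrained ResNet-50 weights.
model = torch.nn.Linear(10, 10)

# SGD hyperparameters here are assumed, not taken from the paper.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)

# Cosine annealing over the 40 fine-tuning epochs: the LR decays
# smoothly from 0.01 down to eta_min (0 by default).
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=40)

for epoch in range(40):
    # ... one full training epoch over ImageNet would go here ...
    optimizer.step()      # placeholder step (no gradients in this sketch)
    scheduler.step()      # advance the cosine schedule once per epoch
```

After the loop the learning rate has annealed to (approximately) zero, which is the intended end state of the cosine schedule.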
Hi, thanks for your reply.
Do you still have the model trained on ImageNet with the GC block?
Hi, @13952522076
Sorry for the late reply.
Currently, I am too busy to release that part of the code.
If the issue is overfitting, I suggest adopting the augmentations from the original paper as well as dropout on the GC block branch.
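One way to realize the suggested dropout is to drop only the transform-branch output before fusion, leaving the identity path untouched. This wrapper is a hypothetical sketch, not the authors' code:

```python
import torch
import torch.nn as nn

class GCWithBranchDropout(nn.Module):
    """Hypothetical sketch: dropout on the GC transform branch only.
    `gc_transform` is assumed to map a pooled context (N, C, 1, 1)
    to a (N, C, 1, 1) term that is added back to the features."""

    def __init__(self, gc_transform, p=0.1):
        super().__init__()
        self.gc_transform = gc_transform
        self.drop = nn.Dropout(p)   # active in train(), identity in eval()

    def forward(self, x, context):
        # Only the context branch is regularized; x passes through unchanged.
        return x + self.drop(self.gc_transform(context))
```

Because dropout sits on the residual-style branch, at worst it temporarily zeroes the context contribution rather than corrupting the backbone features.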
Hi, @Shiro-LK
The models are not available for now. I will let you know when I train them again.
Thanks a lot for your reply. It looks like it is not an overfitting issue (judging by train loss vs. val loss). Anyway, I appreciate your work; it helped a lot. 😺
@ma-xu Hello, excuse me, has the problem been solved?