Comments (6)
Hi, how many stages are you doing the training for?
Referring to your paper, I think the training is just one stage: the DenseNet is trained end to end without freezing the conv parameters. With that setup, my DenseNet reaches 84.3% without PC and 85.4% with PC, where the input to PC is the softmax output. But this still cannot match the results in your paper, so I think some tricks are missing?
Hi, I think there is some problem with my code, because I cannot reproduce the DenseNet-161 result in PyTorch.
I train the pretrained DenseNet-161 on CUB-200 with standard SGD, linear decay of the learning rate, an initial learning rate of 0.1, 40k iterations, and a batch size of 32. My result is 83.38%, while your paper reports 84.21%. Then I add pairwise confusion to the loss, like: loss = criterion(y_, y) + 10 * PairwiseConfusion(y_). The result is 83.58%, while your paper reports 86.87%.
Can you help me reproduce the results? Thank you.
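For reference, a minimal sketch of what the PairwiseConfusion term could look like in PyTorch, following the batch-splitting Euclidean-confusion formulation from the paper; the function name, shapes, and the mean reduction are assumptions, not necessarily the repo's exact code:

```python
import torch

def pairwise_confusion(outputs):
    # Hypothetical sketch of Euclidean confusion: pair sample i with
    # sample i + B/2 and penalize the distance between their predictions,
    # discouraging over-confident, sample-specific outputs.
    batch_size = outputs.size(0)
    assert batch_size % 2 == 0, "need an even batch size to form pairs"
    left, right = outputs.chunk(2, dim=0)  # two halves of the batch
    # Squared Euclidean distance per pair, averaged over the batch.
    return ((left - right) ** 2).sum(dim=1).mean()
```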
Hello, I fine-tune DenseNet-161 on the bird dataset in two stages: first freezing the conv layers and fine-tuning the classifier, then fine-tuning all layers. But the highest top-1 accuracy is only 78%. When I fine-tune all layers directly in one stage it is even worse, only 50%+ accuracy. My hyperparameters are the same as yours above. Is there anything I need to take care of? Thank you!
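For comparison, a minimal sketch of the two-stage schedule this comment describes (freeze the backbone, train the classifier, then unfreeze everything); the stage-2 learning rate here is an illustrative assumption, not the commenter's setting:

```python
import torch
import torchvision

model = torchvision.models.densenet161(pretrained=True)
model.classifier = torch.nn.Linear(model.classifier.in_features, 200)  # CUB-200 head

# Stage 1: freeze the convolutional backbone, train only the new classifier.
for p in model.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.SGD(model.classifier.parameters(), lr=0.1, momentum=0.9)
# ... run the stage-1 training loop here ...

# Stage 2: unfreeze everything and fine-tune end to end at a lower rate.
for p in model.features.parameters():
    p.requires_grad = True
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# ... run the stage-2 training loop here ...
```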
I think you should calculate the confusion loss after the softmax. I just train the network in one stage without any tricks, but I also cannot reach the accuracy reported in the paper, so I think I am missing some tricks as well.
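In code, the suggestion amounts to something like the sketch below, with the weight of 10 taken from the earlier comment: cross-entropy consumes the raw logits, while the confusion term consumes softmax probabilities (the pairing scheme repeats the hypothetical batch-splitting sketch above):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

def total_loss(logits, targets, weight=10.0):
    ce = criterion(logits, targets)        # cross-entropy on raw logits
    probs = torch.softmax(logits, dim=1)   # confusion acts on probabilities
    left, right = probs.chunk(2, dim=0)    # pair the two halves of the batch
    confusion = ((left - right) ** 2).sum(dim=1).mean()
    return ce + weight * confusion
```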
Thanks for your reply. I fine-tune DenseNet-161 in one stage with different learning rates for the backbone and the classifier, and I get your result, 83.38%. However, the new problem is that when I add the confusion loss, calculated after the softmax, the accuracy does not improve at all, whether I fine-tune from the ImageNet parameters or from the 83.38% checkpoint. Could you share your PyTorch code with me by email or some other way? My email is [email protected]. Thank you very much!
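A minimal sketch of the "different learning rates for backbone and classifier" setup, using optimizer parameter groups; the specific rates are illustrative assumptions:

```python
import torch
import torchvision

model = torchvision.models.densenet161(pretrained=True)
model.classifier = torch.nn.Linear(model.classifier.in_features, 200)

# One parameter group per sub-module, each with its own learning rate:
# a small rate for the pretrained backbone, a larger one for the new head.
optimizer = torch.optim.SGD(
    [
        {"params": model.features.parameters(), "lr": 0.001},
        {"params": model.classifier.parameters(), "lr": 0.1},
    ],
    momentum=0.9,
)
```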
@goldentimecoolk
Did you manage to get the PyTorch code working as described in the paper?
Thank you
Related Issues (12)
- Implementation issue for bilinear VGG HOT 1
- about train problems HOT 2
- How to avoid log zero in EntropicConfusion ?
- Logits or softmax probabilities HOT 3
- About the training process HOT 3
- training encounter with NAN HOT 9
- [HELP] : The loss_train is always 5.3833 HOT 7
- is RPN_PRE_NMS_TOP_N per image?
- Wish HOT 1
- pytorch loss HOT 1
- Ask for help: Compile error! HOT 1