Comments (4)
Hi, have you tried using a lower learning rate?
Hello mostafaelhoushi,
I trained the model with our internal framework, so the released training code may have some bugs. Thanks for discovering this problem; here are some suggestions, and maybe we can find the cause together:
- Try a full-precision model by setting bit=32. If that model can be trained, then the problem must be in the quantization.
- Try not learning the clipping threshold: set the learning rate of alpha to 0 (a sketch is shown after this list) and see if the model can be trained.
- If the full-precision model cannot be trained either, then the problem must be the hyper-parameters; try a lower learning rate.
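For the second suggestion, here is a minimal sketch of one way to freeze alpha by giving it a zero learning rate in its own optimizer parameter group. The parameter name "alpha" and the SGD settings are assumptions for illustration, not necessarily how main.py organizes its optimizer:

```python
import torch

def build_optimizer(model, lr=0.1, momentum=0.9, weight_decay=1e-4):
    # Split parameters so the clipping thresholds get their own group.
    alpha_params, other_params = [], []
    for name, p in model.named_parameters():
        # Assumption: clipping thresholds are registered with "alpha" in their name.
        (alpha_params if "alpha" in name else other_params).append(p)
    return torch.optim.SGD(
        [
            {"params": other_params, "lr": lr},
            {"params": alpha_params, "lr": 0.0},  # LR 0 => alpha stays fixed
        ],
        momentum=momentum,
        weight_decay=weight_decay,
    )
```

With the learning rate pinned to 0, alpha receives no updates even though gradients still flow through it, so the run isolates whether learning the threshold is what destabilizes training.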
Thanks @yhhhli. I played around with the learning rate and batch size. When I set the batch size to 128 and the learning rate to 0.001, training accuracy starts at around 70% and soon reaches 89%. There may be an even better combination of learning rate and batch size.
Just a side note: looking at the code, the default batch size seems to be 1024. However, when we run main.py without setting the batch size, the log in the screenshot shows 256,234 batches per epoch. If the ImageNet training set holds around 1 million images, that implies a batch size of about 4. I don't understand how the code can have a default batch size of 1024 yet effectively run with a batch size of about 4; a batch size of 4 would be expected to cause this degradation in accuracy under the default learning rate.
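A quick worked check of that estimate (plain arithmetic, not from the repo):

```python
num_batches = 256_234          # batches per epoch, from the screenshot
approx_images = 1_000_000      # rough ImageNet size used in the comment
print(approx_images / num_batches)   # ~3.9, i.e. a batch size of about 4
# With ImageNet-1k's actual 1,281,167 training images the ratio is ~5.0.
print(1_281_167 / num_batches)
```

The exact figure comes out to a batch size of 5, which matches the `-b 5` explained in the next comment.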
I found the cause of the problem: I mistakenly used -b 5 to set the bitwidth to 5, while -b actually sets the batch size. Sorry for that!
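For anyone hitting the same thing, here is a minimal argparse sketch of this kind of flag collision. Only the -b/batch-size behavior and the 1024 default are confirmed by the thread; the bitwidth flag name ("--bit") and its default are assumptions:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-b", "--batch-size", type=int, default=1024)
parser.add_argument("--bit", type=int, default=32)  # assumed name/default

args = parser.parse_args(["-b", "5"])  # intended to set the bitwidth...
print(args.batch_size)  # 5  -> tiny batches, hence the accuracy collapse
print(args.bit)         # 32 -> bitwidth left at its default
```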
Related Issues (20)
- Why the size of Res20_2bit is the same as Res20_32bit? HOT 1
- uniform_quantization HOT 1
- Size and accuracy HOT 5
- Lightning Integration
- Technical details HOT 2
- about uniform quantization HOT 2
- about CIFAR10 part main.py resume function HOT 2
- the precision a4w4 of training MobilenetV2 is nearly 0 HOT 4
- a4w4 Resnet18 is 1.7% lower than that in the paper?
- The MUL unit of APOT HOT 1
- Need Suggestion
- Some results about resnet20 on cifar10
- quantization bit of apot HOT 2
- about training time
- difference between paper and code in quan_layer HOT 2
- calculate MAC
- Hyper-Params on MobileNet_V2 HOT 1
- The migration of this QAT function? HOT 5
- NaN loss for 8bit HOT 1
- Differences between quant_layer.py HOT 5