Comments
My rule of thumb: if your losses are > 1.0 for the early layers (1-3), the calibration data is off or the tokenizer is not properly configured. In my experience, each module in each layer has its own loss trend, and some modules are simply harder to quantize. MoE models are the worst case for GPTQ because of the gating/router layer.
- Use the running average quantization loss as a guide to whether the quant is usable (see the sketch below).
- Run perplexity (PPL) after quantization as test 1.
- Run a human eval as test 2.
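A minimal sketch of such a run, assuming the README-style `AutoGPTQForCausalLM`/`BaseQuantizeConfig` API; the model id, calibration texts, and output path are placeholders:

```python
import logging

from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

# AutoGPTQ reports the per-module avg loss at INFO level while quantizing.
logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s", level=logging.INFO)

model_id = "Qwen/Qwen-72B-Chat"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Calibration samples: a mismatched tokenizer or off-domain text here is
# the usual cause of early-layer avg loss > 1.0.
calib_texts = [
    "GPTQ quantizes weights layer by layer against calibration activations.",
    "The quick brown fox jumps over the lazy dog.",
]
examples = [tokenizer(t) for t in calib_texts]

quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit weights
    group_size=128,  # common default
    desc_act=False,  # faster inference at a small accuracy cost
)

model = AutoGPTQForCausalLM.from_pretrained(
    model_id, quantize_config, trust_remote_code=True
)
model.quantize(examples)  # watch the running avg loss per module in the logs
model.save_quantized("qwen-72b-chat-gptq-4bit")
```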
Thanks for your quick reply, @Qubitium!
My losses are below 0.05 in the first three layers, as you mentioned above, but they eventually rise above 10.0 in the last 40 layers. Is that normal in your experience?
(PS: my model is a finetuned version of Qwen-72B-Chat, which has 80 layers in total.)
I'll run the two tests you mentioned after I finish quantizing my model; a sketch of the PPL test is below.
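A rough sketch of test 1 (post-quant perplexity on wikitext-2), assuming the quantized checkpoint loads via `AutoGPTQForCausalLM.from_quantized`; the paths, the 2048-token window, and the non-overlapping stride are illustrative choices:

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "Qwen/Qwen-72B-Chat"            # placeholder: base model, for the tokenizer
quantized_dir = "qwen-72b-chat-gptq-4bit"  # placeholder: quantized checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoGPTQForCausalLM.from_quantized(
    quantized_dir, device="cuda:0", trust_remote_code=True
)

# Concatenate the wikitext-2 test split into one long token stream.
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
input_ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids

seq_len = 2048  # evaluation window; non-overlapping chunks for simplicity
nlls, n_tokens = [], 0
for begin in range(0, input_ids.size(1), seq_len):
    chunk = input_ids[:, begin : begin + seq_len].to("cuda:0")
    if chunk.size(1) < 2:  # need at least one next-token target
        break
    with torch.no_grad():
        # labels == input_ids makes the model return the mean next-token NLL
        out = model(chunk, labels=chunk)
    nlls.append(out.loss.float() * (chunk.size(1) - 1))
    n_tokens += chunk.size(1) - 1

ppl = torch.exp(torch.stack(nlls).sum() / n_tokens)
print(f"wikitext-2 perplexity: {ppl.item():.2f}")
```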
Related Issues
- [FEATURE] ADD Support DBRX
- zeros remain zero?
- [FEATURE] ADD Jamba Support
- [BUG] Can not save quantized model to disk: "you shouldn't move a model that is dispatched using accelerate hooks."
- Error when trying to quantize the JAIS model.
- [BUG] GPTQ Kernels dont work with PEFT
- Error when quantizing mixtral 8x7b model. "ZeroDivisionError: float division by zero "
- TypeError: forward() missing 1 required positional argument: 'hidden_states'[BUG] ?
- [BUG]GPTQ QWEN-72B-Chat
- gptq 4bit avg loss is large
- export mistral8x7b error
- [BUG] Llama 3 8B Instruct - `no_inject_fused_attention` must be true or else errors out
- Why doesn't AutoGPTQ quantize lm_head layer?
- Why LLaMA3-8B after GPTQ test in wikitext2 so bad?
- [PR Ready for Review] [FEATURE] Extend Support for Phi-3
- [FEATURE] Backport vllm expanded Marlin kernel to autogptq.
- [DEPRECATION] Discussion on Fused attention and QiGEN
- Llama-3 8B Instruct quantized to 8 Bit spits out gibberish in transformers `model.generate()` but works fine in vLLM?
- [BUG]safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer