Comments (2)
The OLMoForCausalLM in hf_olmo and the OlmoForCausalLM in transformers are different models. The repos for the latter carry an -hf suffix. You are trying to load a checkpoint of the latter type into the former model, hence the failures.
AutoModelForCausalLM should load both types of checkpoints properly without warnings (you just need to also import hf_olmo for the former type). Can you share the exact warning message you get when you run AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-0424-hf")?
More context: https://github.com/allenai/OLMo/blob/main/docs/Checkpoints.md
from olmo.
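The naming rule described above can be sketched as a small helper. This is illustrative only: the helper name and the commented usage are assumptions, not part of transformers or hf_olmo.

```python
def needs_hf_olmo(repo_id: str) -> bool:
    """Repos WITHOUT the -hf suffix hold hf_olmo-style (OLMoForCausalLM)
    checkpoints; repos WITH it hold transformers-native (OlmoForCausalLM)
    checkpoints."""
    return not repo_id.endswith("-hf")

# Illustrative usage: importing hf_olmo registers its classes with the
# auto classes, after which AutoModelForCausalLM handles either flavor.
#   if needs_hf_olmo(repo_id):
#       import hf_olmo  # noqa: F401
#   model = AutoModelForCausalLM.from_pretrained(repo_id)
print(needs_hf_olmo("allenai/OLMo-7B-0424"))     # True
print(needs_hf_olmo("allenai/OLMo-7B-0424-hf"))  # False
```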
Oh I see, thanks for the clarification! I was confused about the casing difference too, but this explains it. I don't get any warning when I use AutoModelForCausalLM; I was using that as an example of something that's working fine.
I'm currently subclassing hf_olmo's OLMoForCausalLM with some custom inference-time hooks, but I want to load the intermediate training checkpoints, which are only available for OLMo-7B-0424-hf and not OLMo-7B-0424, from what I can see on the HF landing page. Is there any plan to add the checkpoints to the other model version?
Also, maybe we should link this markdown doc from the official HF landing page for the model checkpoints. Might be helpful to others too.
from olmo.
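The subclassing pattern described above (wrapping the model's forward pass with custom inference-time hooks) can be illustrated with stand-in classes. DummyOLMo below is a placeholder for hf_olmo's OLMoForCausalLM; the real model's forward signature and return type differ.

```python
class DummyOLMo:
    """Stand-in for hf_olmo's OLMoForCausalLM (assumption: not the real API)."""
    def forward(self, x):
        return x * 2  # placeholder for the real forward pass

class HookedOLMo(DummyOLMo):
    """Wraps forward() with a custom inference-time hook."""
    def __init__(self):
        self.trace = []

    def forward(self, x):
        out = super().forward(x)
        self.trace.append((x, out))  # e.g. record activations or logits here
        return out

model = HookedOLMo()
print(model.forward(3))  # 6
print(model.trace)       # [(3, 6)]
```

Because the subclass only wraps forward(), it should work the same way against any checkpoint the base class can load.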
Related Issues (20)
- Key 'https://olmo_checkpoints' not in 'TrainConfig'
- How the 1B and 7B model are initialized?
- Tokenizer with relative path import fails when using olmo as pip library
- Multi node training
- Resuming training on unsharded checkpoint
- What did OLMo 1B converge to?
- Issue with tokenizer wrapper
- start_index not getting reset in data loader when moving to new epoch
- Cannot convert internal OLMo checkpoint to HF
- Can long text be splitted into short texts?
- Is there explicitly instruction-following data in the version of Dolma used to train v1?
- DDP training tries to save sharded checkpoint on the last step
- Does global_train_batch_size support gradient accumulation?
- mlp_ratio not adjusted in config if mlp_hidden_size is set
- Initial Loss increased from 10 (0.3.0 v) to 60 (0.4.0) !
- Model ladder has no documentation
- why CrossEntropyLoss is zero
- Gflops computation is faulty for FSDP due to bug in `OLMo.num_params()`
- Number of tokens Olmo-1B was trained: 2T or 3T?