Comments (5)
Hi Spencer,
I could replicate the issue with the test command, which was related to an update in transformers. This should be fixed now.
I cannot replicate a dynamo problem with the final recipe. What GPU are you testing this with? One debug version you can try is to set impl._inductor_vars=null.
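For reference, the override is just another Hydra-style argument on the command line. Using the test command quoted later in this thread as a template, a debug invocation might look like this (config names taken from the thread, not verified against the repo):

```shell
# Debug run with the custom inductor settings disabled via the
# impl._inductor_vars=null override suggested above.
python pretrain.py name=test arch=hf-bert-base train=bert-base \
    data=sanity-check-2 dryrun=True impl.microbatch_size=2 \
    impl._inductor_vars=null
```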
from cramming.
Thanks for the quick response and update. I have now tried this on a few different GPUs (none of which are covered in your paper, unfortunately) and have run into a few different issues, some expected and some unexpected.
Titan X & 1080 Ti: Things work great when using compile_torch=False. I now get the following (expected) error with compile_torch=True:
torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised RuntimeError: Found NVIDIA TITAN X (Pascal) which is too old to be supported by the triton GPU compiler, which is used as the backend. Triton only supports devices of CUDA Capability >= 7.0, but your device is of CUDA capability 6.1
So the GPUs are just too old to use Triton, which is fine.
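The constraint in that error message is easy to encode: Triton requires CUDA compute capability >= 7.0 (Volta or newer), while Pascal cards like the TITAN X and 1080 Ti report 6.1. A minimal sketch of that check (on a real machine you would feed it torch.cuda.get_device_capability(); the function name here is hypothetical):

```python
# Triton, the GPU backend used by torch.compile's inductor, only supports
# devices of CUDA capability >= 7.0 (per the error message above).
MIN_TRITON_CAPABILITY = (7, 0)

def supports_triton(capability: tuple) -> bool:
    """Return True if a device's (major, minor) CUDA capability can run Triton."""
    # Tuple comparison is lexicographic, so (6, 1) < (7, 0) < (8, 6).
    return capability >= MIN_TRITON_CAPABILITY

print(supports_triton((6, 1)))  # TITAN X (Pascal) / 1080 Ti -> False
print(supports_triton((8, 6)))  # e.g. an Ampere card -> True
```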
A10: I spun up an A10 instance on LambdaLabs, which doesn't have Python 3.9+ installed by default, so I created a Python 3.9 virtual environment as recommended here and then tried the test command
python pretrain.py name=test arch=hf-bert-base train=bert-base data=sanity-check-2 dryrun=True impl.microbatch_size=2
but it results in this error. Adding the null inductor vars flag (as you recommended) gives this error.
Any thoughts?
Hm, these seem more like general torch.compile problems. For the Titan X and 1080 Ti, you can try to disable the triton parts which I'm manually enabling (set impl._inductor_vars.triton.cudagraphs=False).
For the A10, it seems like an environment/installation problem, where torch.compile cannot find the right C headers. I've never used a LambdaLabs instance; are you able to compile simpler models?
Re: the Titan X and 1080 Ti, unfortunately disabling the triton cudagraphs still results in the following error (full terminal output here):
raise BackendCompilerFailed(self.compiler_fn, e) from e torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised AssertionError: While executing %self_encoder_layers_0_attn_self_attention_query_key_value : [#users=1] = call_module[target=self_encoder_layers_0_attn_self_attention_query_key_value](args = (%self_encoder_layers_0_norm1,), kwargs = {})
I have been able to use LambdaLabs A10s with torch's compile option on other language model training code, e.g. nanoGPT. However, for that I only needed Python 3.8, so I didn't have to make a new virtual env with Python 3.9. What is the reason Python 3.9-3.10 is needed for cramming? Is there a difficulty with allowing 3.8?
Ok, thanks for checking. With the test for assert utils.has_triton() being required, torch.compile really is doomed on the older cards.
Regarding Python versions, lower versions of Python are untested on my side; they might work! There might be some type-hinting generics, though, that would require fixing with from __future__ import annotations.
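To illustrate the Python-version issue: PEP 585 generics such as list[int] in annotations are only valid at runtime from Python 3.9, so code hinted that way raises a TypeError on 3.8. The future import makes all annotations lazy strings, which sidesteps this. A small sketch (the helper function is hypothetical, not from the repo):

```python
# Without this import, the list[list[str]] / dict[str, int] hints below
# would fail at import time on Python 3.8; with it, they are never evaluated.
from __future__ import annotations

def token_counts(sequences: list[list[str]]) -> dict[str, int]:
    """Count token occurrences across token sequences (hypothetical helper)."""
    counts: dict = {}
    for seq in sequences:
        for tok in seq:
            counts[tok] = counts.get(tok, 0) + 1
    return counts

print(token_counts([["a", "b"], ["a"]]))  # {'a': 2, 'b': 1}
```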