Comments (9)
Hello @GasserElbanna, thanks a lot for opening this issue!
@TParcollet and/or @salah-zaiem, could you please have a look? Thanks a lot :)
from speechbrain.
Hi, it's important that the total batch size corresponds to roughly 1.6 h of speech. You can adjust this by changing the gradient accumulation factor.
from speechbrain.
Hello, thank you for the quick response. I used the default config file for pre-training, so I am assuming these are the parameters below that I need to adjust?
Dynamic Batching parameters:
max_batch_length: 200 # Fits in a 32GB GPU (V100)
num_buckets: 70
shuffle: True # if true re-creates batches at each epoch shuffling examples.
batch_ordering: random
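For reference, max_batch_length here is a duration budget in seconds: dynamic batching keeps adding utterances to a batch until their summed length would exceed it, so the number of utterances per batch varies. A rough illustrative sketch of the idea (not SpeechBrain's actual sampler, which also buckets utterances by length and handles shuffling/ordering):

```python
# Illustrative only: duration-based dynamic batching in its simplest form.
# Durations are in seconds; max_batch_length caps the total audio per batch.
def make_batches(durations, max_batch_length=200.0):
    batches, current, current_len = [], [], 0.0
    for idx, dur in enumerate(durations):
        if current and current_len + dur > max_batch_length:
            batches.append(current)   # batch is "full", start a new one
            current, current_len = [], 0.0
        current.append(idx)
        current_len += dur
    if current:
        batches.append(current)
    return batches

# Short utterances give large batches, long utterances give small ones.
print(make_batches([12.0, 95.0, 30.0, 180.0, 4.0]))  # [[0, 1, 2], [3, 4]]
```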
from speechbrain.
@Adel-Moumen I see that the gradient accumulation factor is missing from this recipe. Could you add it? (No need to PR imho, push directly to develop.)
@GasserElbanna have a look at any other ASR yaml in the libri folder; you will find the gradient accumulation factor param. Just copy and paste it into this yaml, anywhere. Then play with grad accum / max batch length to make sure that you have 1.2-1.6 h of speech per batch: grad_accum * max_batch_length * n_gpus ≈ 1.6 h.
Also, your A100 should certainly be able to accommodate more than 200 s.
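As a quick sanity check of that rule of thumb (the grad accumulation factor and GPU count below are illustrative values; max_batch_length is the default from the yaml above):

```python
# Seconds of speech seen per optimizer step:
#   grad_accumulation_factor * max_batch_length * n_gpus
# Target is roughly 1.2-1.6 h, i.e. 4320-5760 s.
grad_accumulation_factor = 3   # illustrative
max_batch_length = 200         # seconds per GPU batch (default yaml value)
n_gpus = 8                     # illustrative

seconds_per_step = grad_accumulation_factor * max_batch_length * n_gpus
print(seconds_per_step, round(seconds_per_step / 3600, 2))  # 4800 s -> 1.33 h
```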
from speechbrain.
@Adel-Moumen I see that the gradient accumulation factor is missing from this recipe. Could you add it? (No need to PR imho, push directly to develop.)
Why would it be missing? By default, grad_accumulation_factor is set to 1 (see: https://github.com/speechbrain/speechbrain/blob/develop/speechbrain/core.py#L84). The variable is used in each fit_batch call (see: https://github.com/speechbrain/speechbrain/blob/develop/speechbrain/core.py#L1199). Since grad_accumulation_factor can also be set through a flag (see: https://github.com/speechbrain/speechbrain/blob/develop/speechbrain/core.py#L422-L426), the recipe is technically not missing this feature. You just need to play with --grad_accumulation_factor=N, where N is the number of grad accumulation steps.
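For anyone following along, this is roughly what the grad_accumulation_factor run option amounts to inside the training loop; a generic PyTorch sketch, not SpeechBrain's actual fit_batch:

```python
# Generic gradient-accumulation pattern (illustrative, not SpeechBrain code):
# gradients from N consecutive micro-batches are summed before one
# optimizer step, so the effective batch is N times larger.
import torch

def train_epoch(model, optimizer, loss_fn, batches, grad_accumulation_factor=2):
    optimizer.zero_grad()
    for step, (x, y) in enumerate(batches, start=1):
        # Scale so the accumulated gradient matches an averaged large batch.
        loss = loss_fn(model(x), y) / grad_accumulation_factor
        loss.backward()
        if step % grad_accumulation_factor == 0:
            optimizer.step()
            optimizer.zero_grad()
```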
from speechbrain.
Hi, thanks @TParcollet for the explanation, it's clearer now.
Thanks @Adel-Moumen for pointing out the flag.
I am currently pretraining with --grad_accumulation_factor=2 and max_batch_length=400 on 8 GPUs, yielding 2 * 400 * 8 = 6400 s (~1.8 h).
Here are the logs for the first epoch:
epoch: 1, steps: 4611, lr: 7.68e-05 - train loss: 4.84e+04 - valid loss: 2.86e+03, valid accuracy: 0.26230588555336
from speechbrain.
Seems to be similar to our model checkpoint. Note that you have now done "only" 4611 steps during your first epoch, meaning that the training will go on for much longer; I do expect that you'll get better results. BTW, are you using --precision=fp16 for the pre-training?
I am using fp32 now.
from speechbrain.
fp16 or bf16 would make the training much faster if you have a compatible GPU.
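In case it helps, the --precision run option corresponds to the standard PyTorch autocast pattern below (an illustrative sketch, not SpeechBrain's internals); fp16 additionally needs loss scaling, bf16 usually does not:

```python
import torch

def forward_backward(model, loss_fn, x, y, dtype=torch.bfloat16, scaler=None):
    # Run the forward pass in reduced precision on a compatible GPU.
    with torch.autocast(device_type="cuda", dtype=dtype):
        loss = loss_fn(model(x), y)
    if scaler is not None:        # fp16: scale the loss to avoid gradient underflow
        scaler.scale(loss).backward()
    else:                         # bf16 (or fp32): plain backward
        loss.backward()
    return loss

# fp16: forward_backward(..., dtype=torch.float16, scaler=torch.cuda.amp.GradScaler())
# bf16: forward_backward(..., dtype=torch.bfloat16)
```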
from speechbrain.