Comments (10)
@Adel-Moumen As described in the issue, we are performing a custom FT with our own dataset, not the default CV FT training.
Now that the full training has ended, I can confirm that:
- performance is better on SB 1.0 than on SB 0.5.15 (probably due to the improved data augmentation and the KenLM model)
- there is no performance difference between precision: fp32 and precision: bf16
Thanks a lot for your help.
from speechbrain.
Hello @Craya, thank you very much for reporting this issue to us. I am pinging @asumagic here since the error is related to the streaming inference, which uses features from a very recent version of torchaudio.
May I ask whether you used --precision with the values fp16/bf16 with SB 1.0? You should see a very nice speedup.
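For anyone finding this thread later, the setting being discussed is a single key in the recipe's hyperparameters YAML (the filename and surrounding context here are illustrative, not taken from the thread; the thread itself also shows it being overridden with the --precision command-line flag):

```yaml
# hparams/train.yaml (illustrative filename)
precision: bf16   # one of: fp32, fp16, bf16
```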
Hi @Adel-Moumen,
I used precision: fp32 as defined in the original recipe. I'll try fp16/bf16 immediately, thanks for the tip.
Fabien.
Good. Also, do you mind sharing the exact commit hash of your SpeechBrain version? Are you using the latest SB version available in the dev branch? We fixed some slowness issues linked to the torchaudio resampler, and maybe that is why you were seeing some slowness.
Regarding the results you obtained, do you confirm that you are unable to FT a wav2vec model on the CV CTC recipe template? If so, I will try to retrain one myself and investigate what is happening.
Correct, the issue is those type annotations again, even if you don't use that code... Will fix ASAP.
In the meantime, you can work around the issue by removing this type annotation. Only the inference interfaces are affected; the new torchaudio features are only necessary for the ffmpeg streaming functionality.
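A hedged sketch of the general workaround pattern (all names below are illustrative, not the actual SpeechBrain code): keep a type annotation that refers to a recent torchaudio API from being evaluated at import time, so older torchaudio installs don't break on import.

```python
# Sketch only: StreamReader and open_stream are illustrative names, not the
# actual annotation asumagic refers to.
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by static type checkers, never executed at runtime, so an
    # older torchaudio without this class does not break the import.
    from torchaudio.io import StreamReader

def open_stream(uri: str) -> "StreamReader":
    """Illustrative function; the string annotation is not resolved at runtime."""
    raise NotImplementedError
```

The string form of the return annotation means Python never looks up `StreamReader` when the module is imported, only type checkers do.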
As for the training speed issue, this might be relevant: https://pytorch.org/blog/pytorch-1.12-released/#changes-to-float32-matrix-multiplication-precision-on-ampere-and-later-cuda-hardware
Using fp16/bf16 autocast as described should resolve the issue. For fp32 training, torch.backends.cuda.matmul.allow_tf32 = True would restore the 1.11 behavior.
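A minimal sketch of the two options above (assumes PyTorch >= 1.12; the autocast part runs on CPU here purely for illustration, while the TF32 flags only take effect on Ampere-or-later GPUs):

```python
import torch

# Option 1: for fp32 training, restore the pre-1.12 TF32 matmul behavior.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True  # analogous flag for cuDNN convolutions

# Option 2: mixed-precision autocast, which is what --precision fp16/bf16
# selects: eligible ops (like matmul) run in the lower precision inside the
# autocast region.
x = torch.randn(8, 8)
w = torch.randn(8, 8)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = x @ w
print(y.dtype)  # the matmul ran in bf16
```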
> Using fp16/bf16 autocast as described should resolve the issue. For fp32 training, torch.backends.cuda.matmul.allow_tf32 = True would restore the 1.11 behavior.

Interesting. I guess we should re-introduce torch.backends.cuda.matmul.allow_tf32 = True? I never really understood why we weren't using it.

I don't think there was an explicit decision not to use it in SpeechBrain; more likely it was never brought up.
It lowers the precision of the matmul in a hardware-dependent way, which seems to be PyTorch's rationale for making this change.
I think it makes more sense to recommend defaulting to fp16/bf16, but I don't know whether we have any models that are not tolerant to fp16 autocast. And if so, I don't know whether they would work with tf32 matmul (presumably they would).
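The fp16-tolerance concern above comes down to dynamic range: bfloat16 keeps (nearly) fp32's exponent range, so values overflow far less easily than in fp16, at the cost of mantissa precision. A quick illustration with `torch.finfo`:

```python
import torch

# fp16 has a tiny representable range: large activations or losses overflow.
print(torch.finfo(torch.float16).max)   # 65504.0

# bf16 trades mantissa bits for fp32-like exponent range: almost no overflow.
print(torch.finfo(torch.bfloat16).max)  # ~3.39e38

# fp32 for comparison.
print(torch.finfo(torch.float32).max)   # ~3.40e38
```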
Test with precision: fp16:
100%|██████████| 7879/7879 [33:26<00:00, 3.93it/s, train_loss=3.14]
Test with precision: bf16:
100%|██████████| 7879/7879 [31:58<00:00, 4.11it/s, train_loss=2.51]
Thanks a lot @Adel-Moumen & @asumagic, you solved my problem faster than 2 epochs!
Np. Keep me posted about the final results. And could you please let me know if you are still experiencing bad results on SB 1.0 on CV CTC? If so, I can take a deeper look.