
Comments (10)

Craya avatar Craya commented on May 25, 2024

Test with precision: fp16:
100%|██████████| 7879/7879 [33:26<00:00, 3.93it/s, train_loss=3.14]

Test with precision: bf16:
100%|██████████| 7879/7879 [31:58<00:00, 4.11it/s, train_loss=2.51]

Thanks a lot @Adel-Moumen & @asumagic , you solved my problem faster than 2 epochs!

from speechbrain.

Craya avatar Craya commented on May 25, 2024

@Adel-Moumen As described in the issue, we are performing a custom FT with our own dataset, not the default CV FT training.

Now that the full training has ended, I can confirm that:

  • performance is better on SB 1.0 than on SB 0.5.15 (probably due to the improved data augmentation and the KenLM model)
  • there is no performance difference between precision: fp32 and precision: bf16

Thanks a lot for your help.


Adel-Moumen avatar Adel-Moumen commented on May 25, 2024

Hello @Craya, thank you very much for reporting this issue to us. I am pinging @asumagic here since the error is related to the streaming inference, which uses features from a very recent version of torchaudio.

May I ask if you used --precision with the values fp16/bf16 on SB 1.0? You should see a very nice speedup.
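For reference, a sketch of what the override might look like on the command line; the script and hparams paths here are placeholders, not the exact files of this recipe:

```shell
# Placeholder paths; SpeechBrain recipes accept hyperparameter overrides
# on the command line, so precision can be switched without editing the YAML.
python train.py hparams/train.yaml --precision=bf16
```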


Craya avatar Craya commented on May 25, 2024

Hi @Adel-Moumen ,

I used precision: fp32 as defined in the original recipe, I'll try with fp16/bf16 immediately, thanks for the tip.

Fabien.


Adel-Moumen avatar Adel-Moumen commented on May 25, 2024

> Hi @Adel-Moumen ,
>
> I used precision: fp32 as defined in the original recipe, I'll try with fp16/bf16 immediately, thanks for the tip.
>
> Fabien.

Good. Also, do you mind sharing the exact commit hash of your SpeechBrain version? Are you using the latest SB version available on the dev branch? We fixed some slowness issues linked to the torchaudio resampler, which may explain the slowness you were seeing.

Regarding the results you obtained, do you confirm that you are unable to fine-tune a wav2vec model with the CV CTC recipe template? If so, I will try to retrain one myself and investigate what is happening.


asumagic avatar asumagic commented on May 25, 2024

> Hello @Craya, thank you very much for reporting this issue to us. I am pinging @asumagic here since the error is related to the streaming inference, which uses features from a very recent version of torchaudio.
>
> May I ask if you used --precision with the values fp16/bf16 on SB 1.0? You should see a very nice speedup.

Correct, the issue is those type annotations again, even if you don't use that code... Will fix ASAP.

In the meantime, you can work around the issue by removing this type annotation. Only the inference interfaces are affected; the new torchaudio features are only needed for the ffmpeg streaming functionality.
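A minimal sketch of one way to guard such an annotation, assuming the offending symbol comes from `torchaudio.io` (the exact symbol and interface in SpeechBrain may differ; `open_stream` is a hypothetical stand-in): with postponed annotation evaluation plus a `TYPE_CHECKING` import, the name is never resolved at runtime, so an older torchaudio no longer breaks the import.

```python
from __future__ import annotations  # all annotations become strings, resolved lazily

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by static type checkers, never executed, so running with an
    # older torchaudio that lacks this symbol no longer crashes the import.
    from torchaudio.io import StreamReader


def open_stream(uri: str) -> StreamReader:
    """Hypothetical streaming-inference entry point (sketch only)."""
    ...
```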


asumagic avatar asumagic commented on May 25, 2024

As for the training speed issue, this might be relevant: https://pytorch.org/blog/pytorch-1.12-released/#changes-to-float32-matrix-multiplication-precision-on-ampere-and-later-cuda-hardware

Using fp16/bf16 autocast as described should resolve the issue. For fp32 training, torch.backends.cuda.matmul.allow_tf32 = True would restore the 1.11 behavior.
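The flag is a one-liner set once at startup; a sketch, with the cuDNN counterpart shown for completeness:

```python
import torch

# Since PyTorch 1.12, fp32 matmuls on Ampere+ GPUs no longer use TF32 by
# default. Re-enabling it restores the faster (slightly less precise)
# PyTorch 1.11 behavior for fp32 training.
torch.backends.cuda.matmul.allow_tf32 = True
# cuDNN convolutions have their own switch (already True by default).
torch.backends.cudnn.allow_tf32 = True
```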


Adel-Moumen avatar Adel-Moumen commented on May 25, 2024

> Using fp16/bf16 autocast as described should resolve the issue. For fp32 training, […]

Interesting. I guess we should re-introduce torch.backends.cuda.matmul.allow_tf32 = True? I never really understood why we weren't using it.


asumagic avatar asumagic commented on May 25, 2024

> Using fp16/bf16 autocast as described should resolve the issue. For fp32 training, […]
>
> Interesting. I guess we should re-introduce torch.backends.cuda.matmul.allow_tf32 = True? I never really understood why we weren't using it.

I don't think there was an explicit decision not to do it in SpeechBrain; more likely it was never brought up.
It lowers the precision of the matmul in a hardware-dependent way, which seems to be PyTorch's rationale for making this change.
I think it makes more sense to recommend defaulting to fp16/bf16, but I don't know whether we have any models that are not tolerant to fp16 autocast, and if so, whether they would work with tf32 matmul (presumably they would).
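For illustration, a minimal autocast sketch (not SpeechBrain's actual training loop) showing why bf16 is the more forgiving default: bf16 shares fp32's exponent range and usually needs no loss scaling, while fp16's narrower range relies on GradScaler to avoid gradient underflow.

```python
import torch

# Toy model and data; sketch only, not a real recipe.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(16, 4).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# GradScaler matters for fp16; it is a no-op here (bf16 / CPU fallback),
# but keeping it makes switching dtype to float16 on GPU a one-line change.
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(8, 16, device=device)
y = torch.randn(8, 4, device=device)

with torch.autocast(device_type=device, dtype=torch.bfloat16):
    loss = torch.nn.functional.mse_loss(model(x), y)

scaler.scale(loss).backward()
scaler.step(opt)
scaler.update()
```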


Adel-Moumen avatar Adel-Moumen commented on May 25, 2024

> Test with precision: fp16: 100%|██████████| 7879/7879 [33:26<00:00, 3.93it/s, train_loss=3.14]
>
> Test with precision: bf16: 100%|██████████| 7879/7879 [31:58<00:00, 4.11it/s, train_loss=2.51]
>
> Thanks a lot @Adel-Moumen & @asumagic , you solved my problem faster than 2 epochs!

Np. Keep me posted about the final results. And could you please let me know if you are still seeing bad results on SB 1.0 with the CV CTC recipe? If so, I can take a deeper look.


