
Comments (9)

Adel-Moumen commented on May 25, 2024

> Hi, thanks @TParcollet for the explanation, it's clearer now. Thanks @Adel-Moumen for pointing out the flag.
>
> I am currently pretraining with `--grad_accumulation_factor=2` and `max_batch_length=400` on 8 GPUs, yielding 2 * 400 * 8 = 6400 s (~1.8 h).
>
> Here are the logs for the first epoch: epoch: 1, steps: 4611, lr: 7.68e-05 - train loss: 4.84e+04 - valid loss: 2.86e+03, valid accuracy: 0.26230588555336

That seems similar to our model checkpoint. Note that you have now done "only" 4611 steps in your first epoch, meaning the training will go on for much longer. I expect that you'll get better results.

BTW, are you using `--precision=fp16` for the pre-training?


Adel-Moumen commented on May 25, 2024

Hello @GasserElbanna, thanks a lot for opening this issue!

Could @TParcollet and/or @salah-zaiem please have a look? Thanks a lot :)


TParcollet commented on May 25, 2024

Hi, it's important that the total batch size corresponds to roughly 1.6 h of speech. You can adjust this by changing the gradient accumulation factor.


GasserElbanna commented on May 25, 2024

Hello, thank you for the quick response. I used the default config file for pre-training. So I am assuming the parameters below are the ones I need to adjust?

```yaml
# Dynamic Batching parameters:
max_batch_length: 200 # Fits on a 32GB GPU (V100)
num_buckets: 70
shuffle: True # If True, re-creates batches at each epoch, shuffling examples.
batch_ordering: random
```
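For context, these keys are consumed by SpeechBrain's dynamic batch sampler, where `max_batch_length` caps the total duration (in seconds) of audio packed into one batch. A minimal sketch of how such a sampler is typically constructed is below; the CSV path is hypothetical and the exact keyword set may differ across SpeechBrain versions:

```python
# Sketch: wiring the dynamic batching hparams into SpeechBrain's sampler.
# Assumes a DynamicItemDataset whose items expose a "duration" key in seconds.
from speechbrain.dataio.dataset import DynamicItemDataset
from speechbrain.dataio.sampler import DynamicBatchSampler

train_data = DynamicItemDataset.from_csv("train.csv")  # hypothetical path

batch_sampler = DynamicBatchSampler(
    train_data,
    max_batch_length=200,    # total seconds of audio allowed per batch
    num_buckets=70,          # group similar-length utterances together
    shuffle=True,            # re-create batches at each epoch
    batch_ordering="random",
    length_func=lambda x: x["duration"],
)
```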


TParcollet commented on May 25, 2024

@Adel-Moumen I see that the gradient accumulation factor is missing from this recipe. Could you add it? (No need for a PR imho, push directly to develop.)

@GasserElbanna have a look at any other ASR YAML in the LibriSpeech folder; you will find the gradient accumulation factor param. Just copy and paste it into this YAML, anywhere. Then play with grad accum / max batch length to make sure that you have 1.2-1.6 h of speech per batch: grad_accum * max_batch_len * nb_gpus = 1.6 h.

Also, your A100 can certainly accommodate more than 200 s per batch.
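As a quick sanity check, the batch-duration arithmetic can be spelled out as below (a minimal sketch; the 1.2-1.6 h window is the target stated above, and `max_batch_length` is in seconds):

```python
# Effective speech per optimizer step = grad_accum * max_batch_length * n_gpus.
grad_accumulation_factor = 2
max_batch_length = 400   # seconds of audio per GPU batch
n_gpus = 8

seconds_per_step = grad_accumulation_factor * max_batch_length * n_gpus
print(f"{seconds_per_step} s = {seconds_per_step / 3600:.2f} h per step")
# -> 6400 s = 1.78 h per step
```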


Adel-Moumen commented on May 25, 2024

> @Adel-Moumen I see that the gradient accumulation factor is missing from this recipe. Could you add it? (No need for a PR imho, push directly to develop.)

Why would it be missing? By default, grad_accumulation_factor is set to 1 (see: https://github.com/speechbrain/speechbrain/blob/develop/speechbrain/core.py#L84). The variable is used in each fit_batch call (see: https://github.com/speechbrain/speechbrain/blob/develop/speechbrain/core.py#L1199). Since grad_accumulation_factor can also be set through a flag (see: https://github.com/speechbrain/speechbrain/blob/develop/speechbrain/core.py#L422-L426), the recipe is technically not missing this feature. You just need to play with `--grad_accumulation_factor=N`, where N is the number of gradient accumulation steps.
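For readers unfamiliar with the mechanism, gradient accumulation boils down to scaling the loss and deferring the optimizer step, roughly as sketched below (a generic PyTorch sketch, not SpeechBrain's exact fit_batch code; model, optimizer, and train_loader are assumed to exist):

```python
grad_accumulation_factor = 2  # value passed via --grad_accumulation_factor

for step, batch in enumerate(train_loader):
    loss = model(batch)  # forward pass returning a scalar loss
    # Scale so the accumulated gradient matches one large-batch gradient.
    (loss / grad_accumulation_factor).backward()
    # Only update the weights every grad_accumulation_factor batches.
    if (step + 1) % grad_accumulation_factor == 0:
        optimizer.step()
        optimizer.zero_grad()
```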


GasserElbanna commented on May 25, 2024

Hi, thanks @TParcollet for the explanation, it's clearer now.
Thanks @Adel-Moumen for pointing out the flag.

I am currently pretraining with `--grad_accumulation_factor=2` and `max_batch_length=400` on 8 GPUs, yielding 2 * 400 * 8 = 6400 s (~1.8 h).

Here are the logs for the first epoch:
epoch: 1, steps: 4611, lr: 7.68e-05 - train loss: 4.84e+04 - valid loss: 2.86e+03, valid accuracy: 0.26230588555336


GasserElbanna commented on May 25, 2024

> BTW, are you using `--precision=fp16` for the pre-training?

I am using fp32 now.


TParcollet commented on May 25, 2024

fp16 or bf16 would make the training much faster if you have a compatible GPU.
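In plain PyTorch terms, what `--precision=fp16` enables looks roughly like the snippet below (a generic autocast sketch, not SpeechBrain's internal implementation; model, optimizer, and train_loader are assumed; note that bf16 needs an Ampere-class GPU such as the A100 and does not require loss scaling):

```python
import torch

scaler = torch.cuda.amp.GradScaler()  # loss scaling, needed for fp16

for batch in train_loader:
    optimizer.zero_grad()
    # Run the forward pass with half-precision activations.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = model(batch)
    scaler.scale(loss).backward()  # backward on the scaled loss
    scaler.step(optimizer)         # unscale grads, then optimizer step
    scaler.update()                # adjust the scale factor
```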


