
Comments (6)

BinWone commented on July 29, 2024

> What do you mean by the training time is the same? Is the perplexity the same at the end of a few epochs? Or do you look at the number of words per second? The number of words per second in the log is given per GPU, so this will be the same. But the loss / perplexity should decrease much faster.

Yes, I made a mistake. You are right, multi-GPU training gets better validation ppl and accuracy.
pretraining on 1 GPU: [training log screenshot]
on 4 GPUs: [training log screenshot]


hpsun1109 commented on July 29, 2024

Another question: in the UNMT model, is there only one encoder and one decoder? Thanks.


glample commented on July 29, 2024

You should not handle the --local_rank yourself. You can use the following command to train with multi-GPU: https://github.com/facebookresearch/XLM#how-can-i-run-experiments-on-multiple-gpus

export NGPU=8; python -m torch.distributed.launch --nproc_per_node=$NGPU train.py ARGUMENTS
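For context, here is a minimal sketch (generic PyTorch, not XLM's actual code) of what a script launched this way typically does with the --local_rank argument that torch.distributed.launch injects. XLM's train.py already handles this internally, which is why you should not set it yourself:

```python
# Minimal DDP sketch: torch.distributed.launch starts one process per GPU
# and passes --local_rank to each of them (generic illustration only).
import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # filled in by the launcher
args = parser.parse_args()

# Bind this process to its GPU and join the process group
# (the launcher sets MASTER_ADDR / MASTER_PORT / RANK / WORLD_SIZE).
torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend="nccl", init_method="env://")

# Wrap the model so gradients are averaged across all processes each step.
model = torch.nn.Linear(1024, 1024).cuda()
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank])
```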

And no, there are 2 separate models for UNMT, one encoder and one decoder, but they are initialized with the same weights (apart from the parameters of the source attention in the decoder that remain randomly initialized).
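As an illustration of that initialization scheme, here is a hedged sketch of loading one set of pretrained weights into both the encoder and the decoder while leaving the decoder's source-attention parameters randomly initialized. The parameter-name filter ("encoder_attn") is an assumption made for illustration, not necessarily XLM's actual naming:

```python
# Sketch: initialize encoder and decoder from the same pretrained weights,
# keeping the decoder's source-attention parameters at their random init.
def init_from_pretrained(encoder, decoder, pretrained_state_dict):
    # Encoder: copy every parameter that matches by name.
    encoder.load_state_dict(pretrained_state_dict, strict=False)

    # Decoder: copy matching parameters, but skip source-attention weights
    # ("encoder_attn" is a hypothetical name filter for this example).
    decoder_state = {
        k: v for k, v in pretrained_state_dict.items()
        if k in decoder.state_dict() and "encoder_attn" not in k
    }
    decoder.load_state_dict(decoder_state, strict=False)
```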


BinWone commented on July 29, 2024

> You should not handle the --local_rank yourself. You can use the following command to train with multi-GPU: https://github.com/facebookresearch/XLM#how-can-i-run-experiments-on-multiple-gpus
>
> export NGPU=8; python -m torch.distributed.launch --nproc_per_node=$NGPU train.py ARGUMENTS
>
> And no, there are 2 separate models for UNMT, one encoder and one decoder, but they are initialized with the same weights (apart from the parameters of the source attention in the decoder that remain randomly initialized).

I used multi-GPU to pre-train the model with export NGPU=8; python -m torch.distributed.launch --nproc_per_node=$NGPU train.py ARGUMENTS, but it just runs the same job on 8 GPUs. The training time is the same as training on 1 GPU, so it doesn't speed up the pre-training process.
How should I set the parameters so that training on multiple GPUs is faster?


glample commented on July 29, 2024

What do you mean by the training time is the same? Is the perplexity the same at the end of a few epochs? Or do you look at the number of words per second? The number of words per second in the log is given per GPU, so this will be the same. But the loss / perplexity should decrease much faster.
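In other words, the aggregate throughput is roughly the per-GPU figure multiplied by the number of processes. A tiny illustration with made-up numbers (not taken from the logs above):

```python
# Illustration with assumed numbers: the log's words/sec is reported per GPU,
# so effective throughput scales with the number of processes.
per_gpu_wps = 6000              # words/sec shown in the log (assumed value)
num_gpus = 4
print(per_gpu_wps * num_gpus)   # ~24000 words/sec of effective throughput
```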


glample commented on July 29, 2024

Looks good :)

