Comments (26)

manjunaths commented on May 4, 2024

Hello,
Any update on this issue? Is a single worker that uses multiple GPUs in a distributed multi-node setting possible now?

Is there an example?

junshi15 commented on May 4, 2024

Here is my understanding of https://github.com/tensorflow/models/blob/master/slim/deployment/model_deploy.py

Let's say you have 2 workers and 1 parameter server (PS), and each worker has 4 clones (GPUs). Each worker aggregates its clone gradients and then sends them to the PS, which updates the weights. This is all good.

The problem is that the GPUs share the weights stored on the PS, so each GPU fetches the weights independently at the next forward pass. This generates a lot of traffic, because every GPU communicates with the PS. It would probably be faster to limit the connections to the worker-PS level, so that individual GPUs do not talk to the PS directly: once the weights reach a worker, they are distributed among its clones internally. Distributed Caffe does exactly this kind of hierarchical broadcast, and it saves quite a bit of network bandwidth when a worker has multiple GPUs.
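
A minimal sketch of that per-worker clone layout, assuming the TF 1.x graph API (the task index, clone count, and toy model below are illustrative placeholders, not the actual model_deploy code):

```python
import tensorflow as tf  # assumes the TF 1.x graph-mode API

NUM_CLONES = 4                  # GPUs on this worker (illustrative)
WORKER = '/job:worker/task:0'   # this worker's device prefix (illustrative)

def clone_loss(images, labels):
    """Hypothetical toy model; stands in for the real network."""
    w = tf.get_variable('w', [784, 10])
    b = tf.get_variable('b', [10], initializer=tf.zeros_initializer())
    logits = tf.matmul(images, w) + b
    return tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

images = tf.placeholder(tf.float32, [None, 784])
labels = tf.placeholder(tf.int64, [None])
opt = tf.train.GradientDescentOptimizer(0.01)

clone_grads = []
for i in range(NUM_CLONES):
    # Variables go to the PS; this clone's compute goes to GPU i of the worker.
    device_fn = tf.train.replica_device_setter(
        ps_tasks=1, worker_device='%s/gpu:%d' % (WORKER, i))
    with tf.device(device_fn), tf.variable_scope('model', reuse=(i > 0)):
        clone_grads.append(opt.compute_gradients(clone_loss(images, labels)))

# Average the clone gradients locally and apply them once, so only one set of
# gradients crosses the worker/PS link per step. Note that each clone still
# reads the weights from the PS on its own, which is the traffic pattern
# discussed above.
averaged = []
for grads_and_vars in zip(*clone_grads):
    grads = tf.stack([g for g, _ in grads_and_vars])
    averaged.append((tf.reduce_mean(grads, axis=0), grads_and_vars[0][1]))
train_op = opt.apply_gradients(averaged)
```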

bhack commented on May 4, 2024

/cc @windreamer

ZhuFengdaaa commented on May 4, 2024

I'm now wondering whether my idea above is wrong, i.e. whether it is better to use multiple workers, each controlling one GPU on the machine, than to use one worker that controls all the GPUs on that machine.

ZhiyuTan88 commented on May 4, 2024

Your idea is right. I modified the code so that training runs with each worker (replica) corresponding to one GPU, and it works around this issue.

ZhuFengdaaa commented on May 4, 2024

@Stewarttzy How much does the training speed improve? And could you show us how you implemented it, please?

heibaidaolx123 commented on May 4, 2024

I tried running 16 workers on 2 machines with K80 GPUs, plus 2 PS jobs, one on each machine.
Training is much slower than with just 2 workers.
@ZhuFengdaaa Have you solved the speed issue?

ZhuFengdaaa commented on May 4, 2024

@heibaidaolx123 No, I see the same problem as you do: the more workers, the slower the speed. @sguada I saw you mention in another issue that there will be a performance update for TensorFlow; when will it be released?

aselle commented on May 4, 2024

Did you find a solution to your question?

sguada commented on May 4, 2024

Running 2 PS is a bad idea: since the variables are assigned in round-robin fashion, all the weights go to one PS while all the biases go to the other. When using PS tasks, make sure the load is balanced; you should get better balance with either 1 PS or 3 PS.

The TF 0.9 release should increase the speed, and we are working on the multi-GPU, multi-replica case.
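
If you do run more than one PS, a size-aware placement strategy evens out the load that the round-robin default can get wrong. A minimal sketch, assuming a TF 1.x install where tf.train.replica_device_setter accepts a ps_strategy argument (the names and shapes are illustrative):

```python
import tensorflow as tf  # assumes TF 1.x with tf.contrib available

NUM_PS = 2  # illustrative

# The default strategy assigns variables to PS tasks round-robin, which can
# put all the large weight matrices on one PS and all the small biases on the
# other. A greedy, byte-size-aware strategy spreads bytes instead of counts.
greedy = tf.contrib.training.GreedyLoadBalancingStrategy(
    num_tasks=NUM_PS, load_fn=tf.contrib.training.byte_size_load_fn)

with tf.device(tf.train.replica_device_setter(
        ps_tasks=NUM_PS,
        worker_device='/job:worker/task:0',
        ps_strategy=greedy)):
    # Each variable is placed on whichever PS currently holds the fewest
    # bytes, instead of strictly alternating between PS tasks.
    weights = tf.get_variable('weights', [1024, 1024])
    biases = tf.get_variable('biases', [1024])
```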

heibaidaolx123 commented on May 4, 2024

@sguada I've just tried TF 0.9, and training remains as slow as with TF 0.8.
I also tried using just one PS and got no improvement.
To be clear, I used imagenet_distributed_train.py. I have 2 machines, each with 8 GPUs, linked with InfiniBand, and I set CUDA_VISIBLE_DEVICES to run multiple workers on a single machine.
For 16 workers and 2 PS (equally distributed over the 2 nodes), the speed is about 8.3 examples/sec per worker, about 133 examples/sec in total.
For 16 workers and 1 PS, almost the same speed as above.
For 8 workers and 1 PS on a single machine, about 11 examples/sec per worker, 88 examples/sec in total.
For 4 workers and 1 PS on a single machine, about 18 examples/sec per worker, 72 examples/sec in total.
For 2 workers and 1 PS on a single machine, about 20 examples/sec per worker, 40 examples/sec in total.
For 1 worker and 1 PS on a single machine, about 22 examples/sec.
So for ImageNet distributed training, more workers means a slower per-worker speed.

AIROBOTAI commented on May 4, 2024

Is there any answer to this question?
@ZhuFengdaaa You said in your question that:

... The way I currently used for using all GPU on a worker machine is starting the number of workers that equal to the number of GPUs. ...

How do you make this work? Suppose there are two machines, each with 4 GPUs, so 8 GPUs overall. Do you mean you start 8 workers to use all the GPUs? Do you still need to explicitly assign different workers to different devices in your code, e.g. using the tf.device context manager? Could you describe your approach in more detail?

ZhuFengdaaa commented on May 4, 2024

@AIROBOTAI Yes, I start 8 workers and explicitly assign each worker to its own device using tf.device.
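
For anyone looking for the shape of that setup, here is a minimal sketch, assuming a TF 1.x style script with per-process task_id and gpu_id flags (the flag names, cluster layout, and toy model are illustrative, not the actual modified script):

```python
import tensorflow as tf  # assumes TF 1.x; flag names and model are placeholders

# One process per GPU: every worker gets a unique task_id, plus a gpu_id that
# says which local GPU it owns (e.g. tasks 0-3 -> GPUs 0-3 on machine A,
# tasks 4-7 -> GPUs 0-3 on machine B).
tf.app.flags.DEFINE_integer('task_id', 0, 'Unique index of this worker task.')
tf.app.flags.DEFINE_integer('gpu_id', 0, 'Local GPU used by this worker.')
FLAGS = tf.app.flags.FLAGS

worker_device = '/job:worker/task:%d/gpu:%d' % (FLAGS.task_id, FLAGS.gpu_id)

with tf.device(tf.train.replica_device_setter(
        worker_device=worker_device, ps_tasks=1)):
    # Variables land on the PS; everything else runs on this worker's one GPU.
    w = tf.get_variable('w', [256, 256])          # stand-in for the real model
    loss = tf.reduce_sum(tf.square(w))
    global_step = tf.Variable(0, name='global_step', trainable=False)
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
        loss, global_step=global_step)
```

An alternative mentioned earlier in the thread is to set CUDA_VISIBLE_DEVICES differently for each process, so every worker only sees one GPU and can keep addressing it as /gpu:0.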

girving commented on May 4, 2024

@sguada What's the status of this issue?

AIROBOTAI commented on May 4, 2024

@junshi15 I think you are right about distributing the weights internally. Btw, it seems that Caffe does not have a distributed version yet.

junshi15 commented on May 4, 2024

@AIROBOTAI You are correct, the official BVLC Caffe does not extend beyond a single node. At the risk of self-promotion, I was referring to the Yahoo version of it (https://github.com/yahoo/caffe/tree/master), which is part of CaffeOnSpark (https://github.com/yahoo/CaffeOnSpark). Both Ethernet and InfiniBand connections are supported.

AIROBOTAI commented on May 4, 2024

@junshi15 Thanks for your clarification!

AIROBOTAI commented on May 4, 2024

Hi @ZhuFengdaaa, I found your modified distributed_train.py (if that is how you use multiple GPUs on each replica) and wrote a comment there. Since distributed TF only needs one chief, I think changing is_chief = (FLAGS.task_id == 0) to is_chief = (FLAGS.task_id == 0 and FLAGS.gpu_id == 0) would be better. Could anyone comment on this?

AIROBOTAI commented on May 4, 2024

@ZhuFengdaaa Sorry, I just realized that task_id is unique for each worker, so your code is right.
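
For reference, a minimal sketch of that chief selection, assuming the TF 1.x API with tf.train.MonitoredTrainingSession (the cluster addresses and the toy op are placeholders; the scripts discussed in this thread are organized differently):

```python
import tensorflow as tf  # assumes TF 1.x; host:port strings are placeholders

tf.app.flags.DEFINE_integer('task_id', 0, 'Unique index of this worker.')
FLAGS = tf.app.flags.FLAGS

# task_id is unique across all workers, even when several workers share one
# machine (one worker per GPU), so checking it alone picks exactly one chief.
is_chief = (FLAGS.task_id == 0)

cluster = tf.train.ClusterSpec({
    'ps': ['ps0.example.com:2222'],
    'worker': ['worker0.example.com:2222', 'worker1.example.com:2222'],
})
server = tf.train.Server(cluster, job_name='worker', task_index=FLAGS.task_id)

w = tf.Variable(tf.zeros([10]))     # stand-in for the real model
train_op = w.assign_add(tf.ones([10]))

# Only the chief initializes variables and writes checkpoints/summaries;
# the other workers wait for it before they start training.
with tf.train.MonitoredTrainingSession(master=server.target,
                                       is_chief=is_chief) as sess:
    sess.run(train_op)
```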

AIROBOTAI commented on May 4, 2024

Hi @heibaidaolx123, I'd like to know whether you have re-run the speed benchmark with TF v1.0. Is it any faster?

weixsong commented on May 4, 2024

Hi, does anyone know how to start 2 workers with each worker controlling 8 GPUs? Is there any example code to follow?

ppwwyyxx commented on May 4, 2024

@weixsong https://github.com/tensorflow/benchmarks/tree/master/scripts/tf_cnn_benchmarks

jke-zq commented on May 4, 2024

@ppwwyyxx The benchmarks only perform "in-graph replication" across the GPUs within a single worker, with asynchronous training across workers.
Is there any way to change them to perform synchronous training across workers? Something like the approach discussed in https://stackoverflow.com/questions/39595747/tensorflow-distributed-training-hybrid-with-multi-gpu-methodology
Any hints will be appreciated.
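
One way to get synchronous cross-worker updates on top of per-worker in-graph replication is to wrap the optimizer in tf.train.SyncReplicasOptimizer. A minimal sketch, assuming the TF 1.x API (the task index, worker count, and toy loss are placeholders; in a real script the locally averaged clone gradients from the earlier sketch would feed this wrapper):

```python
import tensorflow as tf  # assumes TF 1.x; the numbers below are placeholders

NUM_WORKERS = 2
TASK_ID = 0  # this worker's index; normally a command-line flag

with tf.device(tf.train.replica_device_setter(
        ps_tasks=1, worker_device='/job:worker/task:%d' % TASK_ID)):
    w = tf.get_variable('w', [256, 256])        # stand-in for the real model
    loss = tf.reduce_sum(tf.square(w))
    global_step = tf.Variable(0, name='global_step', trainable=False)

    # Inside each worker, the clone gradients would be averaged in-graph as
    # shown earlier; the sync wrapper below then sees one gradient per variable.
    base_opt = tf.train.GradientDescentOptimizer(0.01)
    opt = tf.train.SyncReplicasOptimizer(base_opt,
                                         replicas_to_aggregate=NUM_WORKERS,
                                         total_num_replicas=NUM_WORKERS)
    train_op = opt.minimize(loss, global_step=global_step)

# The hook makes every step wait until gradients from all workers arrive,
# so the cross-worker updates are synchronous. Pass it to
# tf.train.MonitoredTrainingSession(..., hooks=[sync_hook]) when training.
sync_hook = opt.make_session_run_hook(is_chief=(TASK_ID == 0))
```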

ccyjava commented on May 4, 2024

@ZhuFengdaaa can you share your example code for "I start 8 workers and explicitly assign each worker to its own device using tf.device"? It is exactly the same problem I am facing, thanks a lot.

alextp commented on May 4, 2024

Closing this issue. It's straightforward to use multiple GPUs in each replica: just build a graph that assigns work to all the GPUs. There are utilities to do this (used by the inception_train file pointed to at the top).

Higher-level APIs to make this easier are being worked on.

Please reopen if you think it's too soon to close this.

cheng-wen-long commented on May 4, 2024

Hello, could you share your code for inception_distribute_train.py? I am facing the same problem. Thanks very much @ZhuFengdaaa
