Comments (26)
Hello,
Any update on this issue? Is a single worker that runs on multiple GPUs in a distributed multi-node setting possible now?
Is there an example?
Here is my understanding of https://github.com/tensorflow/models/blob/master/slim/deployment/model_deploy.py
Let's say you have 2 workers and 1 parameter server, and each worker has 4 clones (GPUs). The worker aggregates the clone gradients then sends them to the parameter server. Then the PS updates the weights. This is all good.
The problem is that the GPUs share the weights on the PS, so each GPU fetches the weights independently at the next forward pass. This generates a lot of traffic, because every GPU communicates with the PS directly. It would probably be faster to limit the connections to the worker-PS level, so that an individual GPU does not talk to the PS directly. Once the weights reach a worker, they are distributed among its clones internally. Distributed Caffe does exactly this kind of hierarchical broadcast; it saves quite a bit of network bandwidth if you have multiple GPUs in a worker.
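For concreteness, here is a rough TF 1.x-style sketch of the flow described above for one worker with 4 clones. The cluster addresses and `model_fn` are placeholders, and the variable placement is simplified (model_deploy.py handles it more carefully):

```python
import tensorflow as tf

# Hypothetical cluster: 1 PS and 2 workers, 4 GPUs (clones) per worker.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})
task_id = 0      # index of this worker
num_clones = 4   # GPUs on this worker

def model_fn():
    # Placeholder model that just returns a scalar loss.
    x = tf.random_normal([32, 10])
    w = tf.get_variable("w", [10, 1])
    return tf.reduce_mean(tf.square(tf.matmul(x, w)))

# Variables are placed on the PS; compute stays on this worker.
with tf.device(tf.train.replica_device_setter(
        cluster=cluster,
        worker_device="/job:worker/task:%d" % task_id)):
    opt = tf.train.GradientDescentOptimizer(0.01)
    clone_grads = []
    for i in range(num_clones):
        with tf.device("/gpu:%d" % i), \
             tf.variable_scope("model", reuse=(i > 0)):
            clone_grads.append(opt.compute_gradients(model_fn()))
    # Average the clone gradients on this worker and push ONE update to the
    # PS. Each GPU still pulls the fresh weights from the PS on its own,
    # which is the traffic pattern discussed above.
    averaged = []
    for grads_and_vars in zip(*clone_grads):
        grads = [g for g, _ in grads_and_vars]
        averaged.append((tf.add_n(grads) / len(grads), grads_and_vars[0][1]))
    train_op = opt.apply_gradients(averaged)
```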
/cc @windreamer
I'm now wondering whether my idea above is wrong, that is, whether it is better to use multiple workers, each controlling one GPU on the machine, than to use one worker controlling all the GPUs on that machine.
Your idea is right. I modified the code so that training runs with each worker (replica) corresponding to one GPU, and it works for this issue.
@Stewarttzy So how much does the training speed improve? And can you show us how you implemented it, please?
I tried running 16 workers on 2 machines with K80 GPUs, and 2 PS jobs, one on each machine.
The training is much slower than running just 2 workers.
@ZhuFengdaaa Have you solved the speed issue?
@heibaidaolx123 No, I see the same problem as you did: the more workers, the slower the speed. @sguada I saw you mention in another issue that there will be a performance update for TensorFlow; when will it be released?
Did you find a solution to your question?
Running 2 PS is a bad idea: since the variables are assigned in round-robin fashion, all the weights go to one PS while all the biases go to the other. When using PS, make sure the load is balanced; you should be able to use either 1 PS or 3 PS for better balance.
The TF 0.9 release should increase the speed, and we are working on the multi-GPU multi-replica case.
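For what it's worth, in later TF releases one way to balance variables across PS tasks (instead of plain round-robin) is a greedy, size-based placement strategy. A minimal sketch, assuming the `tf.contrib.training` API of TF 1.x:

```python
import tensorflow as tf

num_ps = 2

# Place variables on PS tasks greedily by byte size instead of round-robin,
# so the large weights and the small biases don't pile onto separate servers.
greedy = tf.contrib.training.GreedyLoadBalancingStrategy(
    num_ps, tf.contrib.training.byte_size_load_fn)

with tf.device(tf.train.replica_device_setter(
        ps_tasks=num_ps,
        worker_device="/job:worker/task:0",
        ps_strategy=greedy)):
    w = tf.get_variable("w", [1024, 1024])  # a large weight matrix
    b = tf.get_variable("b", [1024])        # a small bias vector
```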
@sguada I've just tried TF 0.9, and training is still as slow as with TF 0.8.
I also tried using just one PS, and got no improvement.
To be clear, I used `imagenet_distributed_train.py`. I have 2 machines, each with 8 GPUs, linked by InfiniBand. I set CUDA_VISIBLE_DEVICES to run multiple workers on a single machine.
- For 16 workers and 2 PS (equally distributed over the 2 nodes), the speed is about 8.3 examples/sec per worker, about 133 examples/sec in total.
- For 16 workers and 1 PS, almost the same speed as above.
- For 8 workers and 1 PS on a single machine, about 11 examples/sec per worker, 88 examples/sec in total.
- For 4 workers and 1 PS on a single machine, about 18 examples/sec per worker, 72 examples/sec in total.
- For 2 workers and 1 PS on a single machine, about 20 examples/sec per worker, 40 examples/sec in total.
- For 1 worker and 1 PS on a single machine, about 22 examples/sec per worker.
So for ImageNet distributed training, more workers lead to slower speed per worker.
Is there any answer to this question?
@ZhuFengdaaa You said in your question that:

> ... The way I currently use all GPUs on a worker machine is to start as many workers as there are GPUs. ...

How do you make this work? Suppose there are two machines, each with 4 GPUs, so 8 GPUs overall. Do you mean you start 8 workers to employ all the GPUs? Do you still need to explicitly assign different workers to different devices in your code, e.g. using the `tf.device` context manager? Could you describe your approach in more detail?
@AIROBOTAI Yes, I start 8 workers and assign each worker to its device explicitly using `tf.device`.
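In case it helps others, a minimal sketch of that "one worker per GPU" layout; the flag names follow this thread, and the loss is a placeholder:

```python
import tensorflow as tf

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_integer("task_id", 0, "Index of this worker task.")
tf.app.flags.DEFINE_integer("gpu_id", 0, "GPU this worker should use.")

def build_loss():
    # Placeholder loss.
    x = tf.random_normal([32, 10])
    w = tf.get_variable("w", [10, 1])
    return tf.reduce_mean(tf.square(tf.matmul(x, w)))

# Variables go to the PS; this worker's compute is pinned to one GPU.
worker_device = "/job:worker/task:%d/gpu:%d" % (FLAGS.task_id, FLAGS.gpu_id)
with tf.device(tf.train.replica_device_setter(
        ps_tasks=1, worker_device=worker_device)):
    loss = build_loss()
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

# Alternatively, launch each worker process with CUDA_VISIBLE_DEVICES set to a
# single GPU and keep worker_device as "/job:worker/task:%d" % FLAGS.task_id.
```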
@sguada What's the status of this issue?
@junshi15 I think you are right about distributing the weights internally. By the way, it seems that Caffe does not have a distributed version yet.
@AIROBOTAI You are correct, the official BVLC Caffe does not extend beyond a single node. At the risk of self-promotion, I was referring to the Yahoo version of it (https://github.com/yahoo/caffe/tree/master), which is part of CaffeOnSpark (https://github.com/yahoo/CaffeOnSpark). Both Ethernet and InfiniBand connections are supported.
@junshi15 Thanks for your clarification!
Hi @ZhuFengdaaa, I found your modified `distributed_train.py` (if this is how you use multi-GPU on each replica) and wrote a comment there. Since distributed TF only needs one chief, I think changing `is_chief = (FLAGS.task_id == 0)` to `is_chief = (FLAGS.task_id == 0 and FLAGS.gpu_id == 0)` would be better. Could anyone comment on this?
@ZhuFengdaaa sorry, I just realized that `task_id` is unique for each worker, so your code is right.
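For anyone following along, a sketch of how `is_chief` is typically consumed (TF 1.x `Supervisor`; the cluster/server setup is omitted):

```python
import tensorflow as tf

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_integer("task_id", 0, "Unique index of this worker.")

# task_id is unique per worker process, so exactly one worker is the chief.
is_chief = (FLAGS.task_id == 0)

global_step = tf.train.get_or_create_global_step()
sv = tf.train.Supervisor(is_chief=is_chief,
                         logdir="/tmp/train_logs",
                         global_step=global_step)
# The chief initializes variables and writes checkpoints/summaries;
# the other workers wait for it and then just train.
```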
Hi @heibaidaolx123, I'd like to know whether you have run the speed benchmark with TF v1.0, and whether the speed gets any faster.
Hi, does anyone know how to start 2 workers with each worker controlling 8 GPUs? Is there any example code to follow?
@weixsong https://github.com/tensorflow/benchmarks/tree/master/scripts/tf_cnn_benchmarks
@ppwwyyxx The benchmarks only perform "in-graph replication" across the GPUs in a single worker and asynchronous training across workers.
Is there any way to change it to perform synchronous training across workers, like what is mentioned in: https://stackoverflow.com/questions/39595747/tensorflow-distributed-training-hybrid-with-multi-gpu-methodology
Any hints would be appreciated.
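Not a definitive answer, but one direction: the cross-worker updates can be made synchronous by wrapping the optimizer in `tf.train.SyncReplicasOptimizer` (TF 1.x). A minimal sketch, with the per-worker loss as a placeholder:

```python
import tensorflow as tf

num_workers = 2
is_chief = True  # normally FLAGS.task_id == 0

# Placeholder per-worker loss (e.g. the average over that worker's GPU towers).
x = tf.random_normal([32, 10])
w = tf.get_variable("w", [10, 1])
loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))

opt = tf.train.GradientDescentOptimizer(0.01)
opt = tf.train.SyncReplicasOptimizer(opt,
                                     replicas_to_aggregate=num_workers,
                                     total_num_replicas=num_workers)

global_step = tf.train.get_or_create_global_step()
train_op = opt.minimize(loss, global_step=global_step)

# The sync hook manages the token queue; pass it to MonitoredTrainingSession.
sync_hook = opt.make_session_run_hook(is_chief)
```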
@ZhuFengdaaa can you share your example code for "I start 8 workers and assign each worker to its device explicitly using tf.device"? It is exactly the same problem I am facing. Thanks a lot.
Closing this issue. It's straightforward to use multiple GPUs in each replica; just build a graph that assigns work to all the GPUs. There are utilities to do this (used by the `inception_train` file pointed to at the top).
Higher-level APIs to make this easier are being worked on.
Please reopen if you think it's too soon to close this.
Hello, could you share your code for `inception_distribute_train.py` with me? I am also hitting this problem. Thanks very much @ZhuFengdaaa