Comments (17)
Ideally, distributed-related tests should run on both an NVLink machine and a PCIe machine.
https://buildkite.com/vllm/ci/builds/7222#018f7121-bd93-456f-ae01-25e0a4e63061
(eager_allreduce pid=3040) INFO 05-13 08:48:35 utils.py:132] reading GPU P2P access cache from /root/.config/vllm/gpu_p2p_access_cache_for_0,1.json
(eager_allreduce pid=3040) WARNING 05-13 08:48:35 custom_all_reduce.py:166] Custom allreduce is disabled because your platform lacks GPU P2P capability or P2P test failed. To silence this warning, specify disable_custom_all_reduce=True explicitly.
FAILED
test_custom_all_reduce.py::test_custom_allreduce[test_target0-2-2] SKIPPED
2024-05-13 08:48:35,962 ERROR worker.py:406 -- Unhandled error (suppress with 'RAY_IGNORE_UNHANDLED_ERRORS=1'): ray::eager_allreduce() (pid=3040, ip=10.68.0.191)
The log above shows that it didn't pass the p2p check (but it passed the nvlink check). Therefore, there might be some non-hardware problem on the Buildkite agent machine?
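As a side note, the warning above says it can be silenced by passing the flag explicitly. A minimal sketch, assuming disable_custom_all_reduce is accepted by the LLM constructor (the model name and tensor_parallel_size are placeholder values):

import torch  # not strictly needed; shown only to make the environment explicit
from vllm import LLM

# Sketch only: disable_custom_all_reduce is the flag named in the warning above.
llm = LLM(model="facebook/opt-125m",
          tensor_parallel_size=2,
          disable_custom_all_reduce=True)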
The log above shows that it didn't pass the p2p check (but it passed the nvlink check)
It skips the nvlink check because it reads the p2p cache file directly rather than testing it.
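For illustration, here is a rough sketch of that read-from-cache-first behavior. The path pattern comes from the log above, but the JSON layout shown is an assumption, not vLLM's exact schema:

import json
import os

# Hypothetical sketch: consult the cached P2P result if present, otherwise
# fall back to running the real P2P test (the JSON schema here is assumed).
cache_path = os.path.expanduser("~/.config/vllm/gpu_p2p_access_cache_for_0,1.json")
if os.path.exists(cache_path):
    with open(cache_path) as f:
        cache = json.load(f)
    print("cached P2P result:", cache)
else:
    print("no cache file; would run the real P2P test and write the result")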
@youkaichao Custom allreduce also works with 2 PCIe cards as a special case
@hanzhi713 do you mean custom allreduce with full_nvlink=False? Is it still more performant than nccl?
It's more performant than NCCL when either
- there are only two PCIe GPUs (they can be connected to the PCIe root complex directly or with a PCIe switch), or
- there are multiple PCIe GPUs connected to the same PCIe switch.
Currently, only case 1 is enabled.
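A rough sketch of the decision just described (not vLLM's actual code; world_size and full_nvlink are illustrative names):

def custom_allreduce_eligible(world_size: int, full_nvlink: bool) -> bool:
    # Sketch of the cases discussed above, not vLLM's real logic.
    if full_nvlink:
        return True          # NVLink-connected GPUs
    return world_size == 2   # case 1: exactly two PCIe GPUs; case 2 (same PCIe switch) is not enabled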
Case 2 is not enabled/currently supported because the memory model of multiple GPUs over the PCIe fabric is not very well documented. I'm afraid that we'll run into some memory ordering/visibility issues.
See #2760 (comment) for a comment regarding performance with more than two PCIe GPUs.
For non-NVLink GPUs, do they need to have P2P access for custom allreduce to work?
On our CI machine, with 2/4 * L4, it seems there is no P2P access. The machine topology is:
        GPU0    GPU1    GPU2    GPU3    CPU Affinity    NUMA Affinity    GPU NUMA ID
GPU0     X      PHB     PHB     PHB     0-47            N/A              N/A
GPU1    PHB      X      PHB     PHB     0-47            N/A              N/A
GPU2    PHB     PHB      X      PHB     0-47            N/A              N/A
GPU3    PHB     PHB     PHB      X      0-47            N/A              N/A

Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
Empirically, I find that P2P access is only available for PIX and NV# connections.
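A quick way to reproduce that observation is to ask the driver about every GPU pair (a minimal sketch):

import torch

# Print what the driver reports for each ordered GPU pair.
n = torch.cuda.device_count()
for i in range(n):
    for j in range(n):
        if i != j:
            print(f"GPU{i} -> GPU{j}: "
                  f"can_device_access_peer = {torch.cuda.can_device_access_peer(i, j)}")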
Yes. You need P2P access for custom allreduce to work. Not all PCIe platforms support this feature. I have a bunch of A30, A10 and T4 machines and the topology is all SYS, but they do support PCIe P2P.
I'm quite confused about how to detect P2P access capability.
On my L4 * 2 machine, torch.cuda.can_device_access_peer(0, 1) == False, but _can_actually_p2p(0, 1) == True.
import torch

print(torch.cuda.can_device_access_peer(0, 1))  # False

def _can_actually_p2p(idx_a, idx_b):
    # Round-trip a tensor from GPU idx_a to GPU idx_b and back,
    # then check that the data survived unchanged.
    dev_i = f"cuda:{idx_a}"
    dev_j = f"cuda:{idx_b}"
    a = torch.randn(5, device=dev_i) + 123.0
    b = a.to(dev_j)
    c = b.to(dev_i)
    return torch.all(a == c).cpu().item()

print(_can_actually_p2p(0, 1))  # True
I believe cudaMemcpyPeer implements something like this:

if can_device_access_peer:
    use_p2p_memcpy()
else:
    use_fallback_implementation_that_goes_through_host_mem()

So even if there's no P2P support, it might still work. We use the test to check that, when P2P is reported as supported, it actually works and produces the correct result.
Here is another script:
import torch
import torch.distributed as dist

dist.init_process_group(backend='nccl', init_method='env://')
torch.cuda.set_device(dist.get_rank())
data = torch.zeros(2, 2, device='cuda') + dist.get_rank() + 1

def share_cuda_tensor(data, src, rank):
    # The source rank serializes the CUDA tensor into an IPC handle and
    # broadcasts it; the other ranks rebuild a view of the same device memory.
    if rank == src:
        func, args = torch.multiprocessing.reductions.reduce_tensor(data)
        dist.broadcast_object_list([[func, args]], src)
    else:
        recv = [None]
        dist.broadcast_object_list(recv, src)
        func, args = recv[0]
        data = func(*args)
    return data

data = share_cuda_tensor(data, 0, dist.get_rank())
if dist.get_rank() == 1:
    data += 1  # rank 1 writes into the memory owned by rank 0
dist.barrier()
print(f"Rank {dist.get_rank()} has data {data}")
The torch.multiprocessing.reductions.reduce_tensor(data) call internally uses t.untyped_storage()._share_cuda_(), which unconditionally uses cudaIpcGetMemHandle. It still succeeds.
See https://github.com/pytorch/pytorch/blob/de42af4b0087118cf5527261c532927efcb9a0df/torch/csrc/StorageSharing.cpp#L324 for details.
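For reference, here is a minimal way to see what reduce_tensor hands over (it assumes a CUDA device is available; the exact contents of args are a PyTorch implementation detail):

import torch
from torch.multiprocessing.reductions import reduce_tensor

t = torch.zeros(4, device="cuda:0")
func, args = reduce_tensor(t)  # args carry the CUDA IPC handle plus tensor metadata
print(func.__name__)           # the rebuild function the receiving process will call
print(len(args))               # number of metadata fields packed alongside the handle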
My question is: why is torch.cuda.can_device_access_peer(0, 1) == False, yet cudaIpcGetMemHandle can still be used for sharing CUDA tensors?
And, if torch.cuda.can_device_access_peer(0, 1) == False but _can_actually_p2p(0, 1) == True, what is the rationale for testing _can_actually_p2p then? It is always True.
It's not always true. can_device_access_peer=True does not mean that P2P is correctly supported, i.e. the driver can be buggy. vLLM runs on all sorts of consumer hardware, and there are edge cases that we must pay attention to. It's not our problem; it's Nvidia's problem, and _can_actually_p2p is our workaround.
See also pytorch/pytorch#119638 for a discussion on this.
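In other words, the two checks complement each other. A minimal sketch of how they might be combined (not vLLM's exact code; _can_actually_p2p is re-stated here from the earlier comment so the snippet is self-contained):

import torch

def _can_actually_p2p(a: int, b: int) -> bool:
    # Round-trip test from the earlier comment: copy a tensor a -> b -> a and compare.
    x = torch.randn(5, device=f"cuda:{a}") + 123.0
    return bool(torch.all(x == x.to(f"cuda:{b}").to(f"cuda:{a}")).cpu().item())

def p2p_usable(a: int, b: int) -> bool:
    # Require both the driver's report and an actual working copy, guarding
    # against drivers that report a capability they cannot deliver.
    return torch.cuda.can_device_access_peer(a, b) and _can_actually_p2p(a, b)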
Not quite sure why can_device_access_peer=False but cudaIpc can still be used. This is the part where documentation or clarification is really lacking from Nvidia.
The doc says "Maps memory exported from another process with cudaIpcGetMemHandle into the current device address space. For contexts on different devices cudaIpcOpenMemHandle can attempt to enable peer access between the devices as if the user called cudaDeviceEnablePeerAccess. This behavior is controlled by the cudaIpcMemLazyEnablePeerAccess flag. cudaDeviceCanAccessPeer can determine if a mapping is possible." and that is what I assume: IPC is only possible if cudaDeviceEnablePeerAccess returns True.
IPC is only possible if cudaDeviceEnablePeerAccess returns True.
I would say IPC + P2P is only possible if cudaDeviceEnablePeerAccess returns True. There is another case: processes can use IPC within the same GPU, which is how PyTorch uses it.
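As a rough illustration of that second case, here is a sketch of two processes sharing a tensor that lives on the same GPU via torch.multiprocessing (synchronization details are simplified):

import torch
import torch.multiprocessing as mp

def child(q):
    t = q.get()   # rebuilt from a CUDA IPC handle; same physical memory, same GPU
    t += 1        # in-place update lands in the shared memory

if __name__ == "__main__":
    mp.set_start_method("spawn")
    t = torch.zeros(4, device="cuda:0")
    q = mp.Queue()
    p = mp.Process(target=child, args=(q,))
    p.start()
    q.put(t)      # sharing through the queue goes through cudaIpcGetMemHandle
    p.join()
    print(t)      # prints ones: the child's write is visible to the parent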