zhuzilin / ring-flash-attention
Ring attention implementation with flash attention
There are some arithmetic errors in the current implementation. The reason is probably that flash attention returns a bf16 value for each block, so we cannot accumulate the values with the original fp32 ones.
If bf16 precision is used instead of fp32, wouldn't the issue of accumulating the values with the original fp32 ones simply not exist?
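A tiny self-contained sketch (my own illustration, not code from the repo) of the rounding effect under discussion: every flash-attention block result has already been rounded to bf16 before it is merged, so even an fp32 accumulator cannot recover the lost bits.

import torch

# toy illustration: 8 "block outputs" summed exactly in fp32 vs. rounded to
# bf16 first and then accumulated in fp32
torch.manual_seed(0)
blocks_fp32 = [torch.randn(4, dtype=torch.float32) for _ in range(8)]

acc_exact = sum(blocks_fp32)                                        # fp32 end to end
acc_mixed = sum(b.to(torch.bfloat16).float() for b in blocks_fp32)  # bf16 blocks, fp32 accumulation

print((acc_exact - acc_mixed).abs().max())  # small but nonzero error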
Ring attention is essentially a distributed version of flash attention. Flash attention v2 maintains the softmax denominator, but when updating the output it seems to only track the running max and defers the division by the denominator to reduce computation; after Q has gone through the whole ring of KV blocks, you just divide once by the global softmax denominator. So can this ring attention implementation of distributed FA be understood as the v1 version of FA?
I tried training on multiple nodes with multiple GPUs and found the time cost increases a lot compared to a single node. Under the same training setup it takes about three times longer than DeepSpeed Ulysses, while there is no such problem on a single node. What could be causing this?
@zhuzilin Could you explain why there is no stripe_flash_attn_varlen_func implementation?
Hello, I'm trying to integrate your code into Megatron-LM. I enabled tensor parallelism, and the code also splits q, k, v into chunks (local_q). How should the results be aggregated in the end?
Hello! Megatron uses the flash_attn_varlen_func function. Have you ever written a flash_attn_varlen_func version?
torchrun --nproc_per_node 4 benchmark/benchmark_varlen_qkvpacked_func.py
# flash_attn_varlen_qkvpacked_func
618.3720160243046 iter/s, 0.16171495056152344 sec
# ring_flash_attn_varlen_qkvpacked_func
80.5184792465389 iter/s, 1.241950927734375 sec
# zigzag_ring_flash_attn_varlen_qkvpacked_func
63.748601413892935 iter/s, 1.568661865234375 sec
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.182.03 Driver Version: 470.182.03 CUDA Version: 12.3 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A100-SXM... On | 00000000:03:00.0 Off | 0 |
| N/A 18C P0 49W / 400W | 3MiB / 40536MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA A100-SXM... On | 00000000:05:00.0 Off | 0 |
| N/A 17C P0 49W / 400W | 3MiB / 40536MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 2 NVIDIA A100-SXM... On | 00000000:15:00.0 Off | 0 |
| N/A 15C P0 49W / 400W | 3MiB / 40536MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 3 NVIDIA A100-SXM... On | 00000000:1E:00.0 Off | 0 |
| N/A 17C P0 48W / 400W | 3MiB / 40536MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
I tested on the 8*A800 machine, and below are my test results.
flash_attn_qkvpacked_func : 464.177 iter/s, 0.215 sec
ring_flash_attn_qkvpacked_func: 28.712 iter/s, 3.483 sec
zigzag_ring_flash_attn_qkvpacked_func: 36.878 iter/s, 2.712 sec
stripe_flash_attn_qkvpacked_func: 37.264 iter/s, 2.684 sec
Compared to the A100 performance in the README, zigzag only achieves 63.5% of the original flash_attn performance. Is this result reasonable?
I ran the test script on my single node machine, where I saw the memory cost of each attention:
benchmark_forward(flash_attn_qkvpacked_func) # 2k. gpu memory
benchmark_forward(ring_flash_attn_qkvpacked_func) # 4k8. gpu memory
benchmark_forward(ring_flash_attn_qkvpacked_func_v2) # 14k. gpu memory
benchmark_forward(zigzag_ring_flash_attn_qkvpacked_func) # 5k. gpu memory
And the running time of each attention:
flash_attn_qkvpacked_func 4470.3194011016185 iter/s, 0.22369766235351562 sec
ring_flash_attn_qkvpacked_func 327.3293995259765 iter/s, 3.0550265312194824 sec
ring_flash_attn_qkvpacked_func_v2 310.57613547187344 iter/s, 3.219822406768799 sec
zigzag_ring_flash_attn_qkvpacked_func 340.40177158407965 iter/s, 2.9377050399780273 sec
Is there something wrong?
Hey @zhuzilin
is it possible to add a mask to zigzag flash attention?
Hello, when fine-tuning a Qwen model with multiple GPUs and multiple batches, the sequence is split across the GPUs. Because of the attention mask, the q/k/v dimensions on the last GPU do not match those on the other GPUs, which eventually causes an NCCL communication timeout. How should this case be handled?
Hi~ @zhuzilin
I'm trying to integrate BPT into ring flash attention: splitting q/k/v by chunk_size and running attention over smaller chunks locally.
Following the forward and backward in ring_flash_attn.py, I implemented blockwise_flash_attn_forward and blockwise_flash_attn_backward.
Currently the forward precision matches, but the backward has errors. What might be wrong in the backward implementation?
Here is my implementation:
# assumed imports for this snippet
import torch
from flash_attn.flash_attn_interface import _flash_attn_forward, _flash_attn_backward
from ring_flash_attn.utils import update_out_and_lse

def blockwise_flash_attn_forward(
    q: torch.Tensor,
    k: torch.Tensor,
    v: torch.Tensor,
    q_chunk_size: int,
    k_chunk_size: int,
    softmax_scale,
    dropout_p=0,
    causal=True,
    return_softmax=True,
):
    assert q.shape[1] % q_chunk_size == 0
    assert k.shape[1] % k_chunk_size == 0
    num_q_chunk = q.shape[1] // q_chunk_size
    num_k_chunk = k.shape[1] // k_chunk_size
    batch, seqlen, num_head, head_dim = q.shape
    block_out = torch.empty(q.shape, dtype=torch.float32, device=q.device)
    block_lse = torch.empty((batch, seqlen, num_head, 1), dtype=torch.float32, device=q.device)
    for i in range(num_q_chunk):
        q_i = q[:, i * q_chunk_size : (i + 1) * q_chunk_size]
        out_i = None
        lse_i = None
        for j in range(num_k_chunk - 1, -1, -1):
            if j > i and causal:
                continue
            k_j = k[:, j * k_chunk_size : (j + 1) * k_chunk_size]
            v_j = v[:, j * k_chunk_size : (j + 1) * k_chunk_size]
            out_ij, _, _, _, _, lse_ij, _, _ = _flash_attn_forward(
                q_i,
                k_j,
                v_j,
                dropout_p,
                softmax_scale,
                causal=causal and j == i,
                return_softmax=return_softmax and dropout_p > 0,
            )
            out_i, lse_i = update_out_and_lse(out_i, lse_i, out_ij, lse_ij)
        block_out[:, i * q_chunk_size : (i + 1) * q_chunk_size] = out_i
        block_lse[:, i * q_chunk_size : (i + 1) * q_chunk_size] = lse_i
    return block_out, block_lse.squeeze(dim=-1).transpose(-1, -2)
def blockwise_flash_attn_backward(
    dout,
    q,
    k,
    v,
    out,
    q_chunk_size,
    k_chunk_size,
    softmax_lse,
    dq,
    dk,
    dv,
    softmax_scale,
    dropout_p,
    causal=True,
    rng_state=None,
):
    assert q.shape[1] % q_chunk_size == 0
    assert k.shape[1] % k_chunk_size == 0
    num_q_chunk = q.shape[1] // q_chunk_size
    num_k_chunk = k.shape[1] // k_chunk_size
    temp_dq_buffer = torch.empty(q[:, :q_chunk_size].shape, dtype=q.dtype, device=q.device)
    temp_dk_buffer = torch.empty(k[:, :k_chunk_size].shape, dtype=k.dtype, device=k.device)
    temp_dv_buffer = torch.empty(v[:, :k_chunk_size].shape, dtype=v.dtype, device=v.device)
    for i in range(num_q_chunk):
        q_i = q[:, i * q_chunk_size : (i + 1) * q_chunk_size].contiguous()
        dout_i = dout[:, i * q_chunk_size : (i + 1) * q_chunk_size].contiguous()
        out_i = out[:, i * q_chunk_size : (i + 1) * q_chunk_size].contiguous()
        softmax_lse_i = softmax_lse[:, :, i * q_chunk_size : (i + 1) * q_chunk_size].contiguous()
        for j in range(num_k_chunk):
            if j > i and causal:
                continue
            k_j = k[:, j * k_chunk_size : (j + 1) * k_chunk_size].contiguous()
            v_j = v[:, j * k_chunk_size : (j + 1) * k_chunk_size].contiguous()
            _flash_attn_backward(
                dout_i,
                q_i,
                k_j,
                v_j,
                out_i,
                softmax_lse_i,
                temp_dq_buffer,
                temp_dk_buffer,
                temp_dv_buffer,
                dropout_p,
                softmax_scale,
                causal=causal and j == i,
                rng_state=rng_state,
            )
            # update dq, dk, dv
            dq[:, i * q_chunk_size : (i + 1) * q_chunk_size] += temp_dq_buffer
            dk[:, j * k_chunk_size : (j + 1) * k_chunk_size] += temp_dk_buffer
            dv[:, j * k_chunk_size : (j + 1) * k_chunk_size] += temp_dv_buffer
These respectively replace _flash_attn_forward in ring_flash_attn_forward and _flash_attn_backward in ring_flash_attn_backward.
Below are my test results:
##############################
# forward:
##############################
out: max 2.896484375, mean 0.0203094482421875
lse: max 10.417832374572754, mean 9.204237937927246
out diff:
[0] max 0.00048828125, mean 8.881092071533203e-06
[1] max 0.0001220703125, mean 7.450580596923828e-06
[2] max 0.0001220703125, mean 5.9604644775390625e-06
[3] max 6.103515625e-05, mean 5.066394805908203e-06
[4] max 6.103515625e-05, mean 4.5299530029296875e-06
[5] max 6.103515625e-05, mean 4.112720489501953e-06
[6] max 6.103515625e-05, mean 3.814697265625e-06
[7] max 6.103515625e-05, mean 3.516674041748047e-06
lse diff:
[0] max 9.5367431640625e-07, mean 1.645181413323371e-07
[1] max 9.5367431640625e-07, mean 2.641230878452916e-07
[2] max 1.9073486328125e-06, mean 3.0044466825529526e-07
[3] max 1.9073486328125e-06, mean 3.3890827921823075e-07
[4] max 1.9073486328125e-06, mean 3.8137659430503845e-07
[5] max 1.9073486328125e-06, mean 4.0913002408160537e-07
[6] max 1.9073486328125e-06, mean 4.272908142866072e-07
[7] max 1.9073486328125e-06, mean 4.6798959374427795e-07
##############################
# backward:
##############################
load_dq:
[0] max 2.783203125, mean 0.052520751953125
[1] max 0.3310546875, mean 0.02398681640625
[2] max 0.2083740234375, mean 0.0184478759765625
[3] max 0.1162109375, mean 0.0155792236328125
[4] max 0.13330078125, mean 0.01374053955078125
[5] max 0.1204833984375, mean 0.01241302490234375
[6] max 0.11260986328125, mean 0.0114288330078125
[7] max 0.0775146484375, mean 0.01064300537109375
dq diff:
[0] max 0.005859375, mean 7.49826431274414e-05
[1] max 0.186279296875, mean 0.01239776611328125
[2] max 0.1973876953125, mean 0.01953125
[3] max 0.235107421875, mean 0.0253143310546875
[4] max 0.30615234375, mean 0.0301361083984375
[5] max 0.52392578125, mean 0.03436279296875
[6] max 0.56689453125, mean 0.038177490234375
[7] max 0.3955078125, mean 0.041748046875
load_dk:
[0] max 2.654296875, mean 0.05340576171875
[1] max 0.256591796875, mean 0.021697998046875
[2] max 0.169921875, mean 0.01535797119140625
[3] max 0.13330078125, mean 0.0116729736328125
[4] max 0.09124755859375, mean 0.0090484619140625
[5] max 0.1158447265625, mean 0.006908416748046875
[6] max 0.050384521484375, mean 0.00492095947265625
[7] max 0.03936767578125, mean 0.002498626708984375
dk diff:
[0] max 0.253173828125, mean 0.03192138671875
[1] max 0.16845703125, mean 0.0232696533203125
[2] max 0.130126953125, mean 0.017364501953125
[3] max 0.1097412109375, mean 0.012786865234375
[4] max 0.10797119140625, mean 0.00893402099609375
[5] max 0.049530029296875, mean 0.005580902099609375
[6] max 0.039337158203125, mean 0.002498626708984375
[7] max 1.52587890625e-05, mean 3.5762786865234375e-07
load_dv:
[0] max 5.89453125, mean 0.05450439453125
[1] max 0.1951904296875, mean 0.021484375
[2] max 0.11883544921875, mean 0.01525115966796875
[3] max 0.10003662109375, mean 0.01158905029296875
[4] max 0.07550048828125, mean 0.00901031494140625
[5] max 0.06658935546875, mean 0.006816864013671875
[6] max 0.041015625, mean 0.00492095947265625
[7] max 0.041961669921875, mean 0.002475738525390625
dv diff:
[0] max 0.3232421875, mean 0.042572021484375
[1] max 0.21240234375, mean 0.03094482421875
[2] max 0.1527099609375, mean 0.0223236083984375
[3] max 0.1075439453125, mean 0.015625
[4] max 0.08245849609375, mean 0.010223388671875
[5] max 0.0447998046875, mean 0.005950927734375
[6] max 0.0419921875, mean 0.002475738525390625
[7] max 3.0517578125e-05, mean 3.5762786865234375e-07
I've read through your code and have a small question:
From the code, each call to the flash attn kernel only computes attention for the current block, so its max is local, and in the subsequent update_out_and_lse function I don't see any logic that rescales with a global max. Is this a problem?
https://github.com/lhao499/RingAttention/blob/90e920affeb634188b8a6fc491608164e1c135b3/bpt/ring_attention.py#L96 has logic that computes based on pre_max_score.
Thank you for the excellent work.
Could you explain line 19 in the function?
I think it should be new_lse = lse + block_lse
Is there any analytic or numerical reason?
ring-flash-attention/ring_flash_attn/utils.py
Lines 9 to 24 in 7895974
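For context, here is a hedged sketch of the merge that this part of utils.py performs (names are illustrative, not copied from the repo): lse is the log of a sum of exponentials, so combining two blocks means adding those sums in linear space, i.e. new_lse = log(exp(lse) + exp(block_lse)) rather than lse + block_lse, and the partial outputs are rescaled to the new normalizer. This also addresses the earlier question about a global max: the per-block lse already absorbs the block max, so no separate max bookkeeping is needed when merging.

import torch

def merge_blocks(out, lse, block_out, block_lse):
    # new_lse = log(exp(lse) + exp(block_lse)), written in log space
    # (fine when lse >= block_lse; a fully robust version would subtract the max first)
    new_lse = lse + torch.log1p(torch.exp(block_lse - lse))
    # rescale both partial outputs to the new normalizer and add them
    new_out = torch.exp(lse - new_lse) * out + torch.exp(block_lse - new_lse) * block_out
    return new_out, new_lse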
I use 4 GPUs to run the code. My command is
torchrun --nproc_per_node 4 test/test_ring_flash_attn_varlen_func.py
my error is
[rank1]: Traceback (most recent call last):
[rank1]: File "/home/xxxx/ring-flash-attention/test/test_ring_flash_attn_varlen_func.py", line 126, in <module>
[rank1]: lse_list = extract_lse(lse, cu_seqlens)
[rank1]: File "/home/xxxx/ring-flash-attention/test/test_ring_flash_attn_varlen_func.py", line 57, in extract_lse
[rank1]: value = lse[i, :, : end - start]
[rank1]: IndexError: too many indices for tensor of dimension 2
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/xxxx/ring-flash-attention/test/test_ring_flash_attn_varlen_func.py", line 126, in <module>
[rank0]: lse_list = extract_lse(lse, cu_seqlens)
[rank0]: File "/home/xxxx/ring-flash-attention/test/test_ring_flash_attn_varlen_func.py", line 57, in extract_lse
[rank0]: value = lse[i, :, : end - start]
[rank0]: IndexError: too many indices for tensor of dimension 2
[rank2]: Traceback (most recent call last):
[rank2]: File "/home/xxxx/ring-flash-attention/test/test_ring_flash_attn_varlen_func.py", line 126, in <module>
[rank2]: lse_list = extract_lse(lse, cu_seqlens)
[rank2]: File "/home/xxxx/ring-flash-attention/test/test_ring_flash_attn_varlen_func.py", line 57, in extract_lse
[rank2]: value = lse[i, :, : end - start]
[rank2]: IndexError: too many indices for tensor of dimension 2
In the backward function of ring-attn, rng_state does not use the value from the forward function; None is passed in directly.
Does this mean that ring-attn does not support dropout?
Is there a way to use this during the decoding stage of inference, in combination with the KV cache?
Hi, Zilin, I'm curious if there is a design document or a conceptual design available for the solution that supports variable length with ring attention?
Currently the implementation splits the input sequence into n blocks, e.g. with 4 GPUs:
b0 | b1 | b2 | b3
However, this results in uneven computation: due to the causal attention mask, the GPU that holds b3 does around 4 times more work than the GPU that holds b0.
If we instead split the input sequence into 2n blocks, e.g. with 4 GPUs:
b0,b7 | b1,b6 | b2,b5 | b3,b4
then all GPUs do the same amount of computation, and theoretically the latency should decrease by about half.
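A minimal sketch (my own, with assumed names rather than code from the repo) of the 2n-block zigzag split described above: rank r keeps chunks r and 2*world_size-1-r, pairing one "cheap" and one "expensive" chunk under the causal mask so all ranks do roughly equal work.

import torch

def zigzag_split(x: torch.Tensor, rank: int, world_size: int, dim: int = 1) -> torch.Tensor:
    # split the sequence dimension into 2 * world_size chunks and keep the
    # rank-th chunk plus its mirror chunk from the other end
    chunks = x.chunk(2 * world_size, dim=dim)
    return torch.cat([chunks[rank], chunks[2 * world_size - 1 - rank]], dim=dim)

# e.g. with 4 GPUs: rank 0 holds (b0, b7), rank 1 holds (b1, b6), ...
seq = torch.arange(16).reshape(1, 16)
print([zigzag_split(seq, r, 4).tolist() for r in range(4)])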
Hey, loving the work on ring flash attention!
I'm contacting you as our community cuda-mode is working on a cuda/pytorch version of ring attention, so feel free to join the discord if you'd like to collaborate or discuss stuff!
https://www.youtube.com/channel/UCJgIbYl6C5no72a0NUAPcTA
https://github.com/cuda-mode
https://discod.gg/cudamode
Hello, I get the error above when using EasyContext's zigzag_ring_flash_attn mode.
All of my data is grouped by length to 32768+1 tokens (following https://github.com/jzhang38/EasyContext/issues/31#issue-2308064466).
It runs fine in data-parallel mode, but sequence parallelism raises the error.
code:
def main(args):
    if args.output_dir:
        os.makedirs(args.output_dir, exist_ok=True)
    if args.wandb:
        import wandb
        wandb.login()
    set_seed(args.seed)
    timeout = InitProcessGroupKwargs(timeout=timedelta(seconds=1_000_000))
    accelerator = Accelerator(
        gradient_accumulation_steps=args.gradient_accumulate_every,
        mixed_precision="bf16",
        log_with="wandb" if args.wandb else None,
        kwargs_handlers=[timeout],
        # fsdp_plugin=fsdp_plugin,
    )
    accelerator.init_trackers(project_name=args.wandb, init_kwargs={"wandb": {"name": args.output_dir.split("/")[-1]}})
    accelerator.print(f"Total GPUS: {accelerator.num_processes}")
    model = AutoModelForCausalLM.from_pretrained(
        args.model,
        device_map=accelerator.device,
        torch_dtype=torch.bfloat16,
        rope_theta=args.rope_theta,
        _attn_implementation="flash_attention_2",
    )
    # tokenizer = AutoTokenizer.from_pretrained(
    #     args.model,
    #     trust_remote_code=True,
    #     # llama does not support the fast tokenizer
    # )
    try:
        train_dataset = load_dataset(args.dataset)
    except:
        train_dataset = load_from_disk(args.dataset)
    if isinstance(train_dataset, DatasetDict):
        train_dataset = train_dataset["train"]
    # train_dataset = QwenSFTDataset(args.dataset, tokenizer, args)
    assert isinstance(
        model, (transformers.LlamaForCausalLM, transformers.MistralForCausalLM)
    ), "Only support llama and mistral model"
    model_type = (
        "llama" if isinstance(model, transformers.LlamaForCausalLM) else "mistral"
    )
    apply_seq_parallel_monkey_patch(args.parallel_mode, model_type)
    if "input_ids" not in train_dataset.column_names:
        raise RuntimeError("Dataset must include an `input_ids` feature")
    # remove everything that is not input_ids
    to_remove = [col for col in train_dataset.column_names if col != "input_ids"]
    train_dataset = train_dataset.remove_columns(to_remove)
    train_dataset = train_dataset.shuffle(seed=args.seed)
    print("Dataset Size:", len(train_dataset))
    train_loader = DataLoader(
        train_dataset,
        collate_fn=default_data_collator,
        shuffle=True,
        batch_size=args.batch_size,
    )
    if args.learning_rate != 2e-5:
        accelerator.print(f"Warning: You also need to modify accelerate_configs/zero3_offload.json to change the learning rate")
    optim = DummyOptim(model.parameters(), lr=args.learning_rate)
    scheduler = DummyScheduler(
        optim,
        num_training_steps=args.max_train_steps,
        total_num_steps=args.max_train_steps,
    )
    model, optim, scheduler = accelerator.prepare(model, optim, scheduler)
    train_loader = prepare_dataloader(args.parallel_mode, train_loader, accelerator)
    model.gradient_checkpointing_enable()
    accelerator.register_for_checkpointing(scheduler)
    accelerator.print(f"Max train steps: {args.max_train_steps}")
    progress_bar = tqdm(
        range(args.max_train_steps), disable=not accelerator.is_local_main_process
    )
    completed_steps = 0
    model.train()
    loss_func = CrossEntropyLoss(inplace_backward=True)
    for step, batch in enumerate(train_loader):
        input_ids = batch["input_ids"][..., : args.seq_length + 1][..., :-1]
        target_ids = batch["input_ids"][..., : args.seq_length + 1][..., 1:]
        position_ids = (
            torch.arange(args.seq_length).unsqueeze(0).expand(input_ids.shape[0], -1)
        )
        # shard the input_ids according to the world size and rank according to zigzag attention
        # print(input_ids.shape, position_ids.shape)  # these values must be equal
        prepared = prepare_seq_parallel_inputs(
            args.parallel_mode,
            input_ids,
            position_ids,
            target_ids,
            accelerator.process_index,
            accelerator.num_processes,
            accelerator.device,
        )
        local_input_ids = prepared["local_input_ids"]
        local_position_ids = prepared["local_position_ids"]
        local_target_ids = prepared["local_target_ids"]
        loss_log = None
        with accelerator.accumulate(model):
            logits = model(
                local_input_ids,
                position_ids=local_position_ids,
            ).logits
            loss = loss_func(
                logits.reshape(-1, logits.shape[-1]), local_target_ids.reshape(-1)
            )
            accelerator.backward(loss)
            if accelerator.sync_gradients:
                # pay attention here: when any seq parallel algo is turned on, this technically only logs the very first chunk's loss,
                # and what the first chunk really is depends on how you shard the sequence.
                # for zigzag attention, the first chunk contains the leftmost and rightmost tokens,
                # so you cannot compare the (logged) loss of dist attention and zigzag ring attention.
                # loss_log = {"loss": loss.item(), "ppl": math.exp(loss.item())}
                # we now try the gathered loss to verify if ring attention and dist flash attention produce the same loss;
                # this may slow down the training
                gathered_loss = accelerator.reduce(loss.clone().detach(), "mean")
                loss_log = {
                    "loss": gathered_loss.item(),
                    "ppl": math.exp(gathered_loss.item()),
                }
                accelerator.log(loss_log, step=completed_steps)
            optim.step()
            scheduler.step()
            optim.zero_grad()
        if accelerator.sync_gradients:
            progress_bar.update(1)
            if loss_log is not None:
                progress_bar.set_postfix(loss_log)
            completed_steps += 1
        if completed_steps >= args.max_train_steps:
            break
    accelerator.print(f"Training Finished")
    accelerator.end_training()
    if args.output_dir is not None:
        accelerator.print(f"Saving model to {args.output_dir}")
        accelerator.wait_for_everyone()
        state_dict = accelerator.get_state_dict(model)
        accelerator.unwrap_model(model).save_pretrained(
            f"{args.output_dir}",
            is_main_process=accelerator.is_main_process,
            save_function=accelerator.save,
            state_dict=state_dict,
        )
        accelerator.print(f"Saving Finished")
Normally we could use other long-context methods like DeepSpeed Ulysses to avoid implementing this.
Hello, I noticed there is a ring_flash_attn_func API but no test example. I am training llama with flash_attn_func and have no idea how to use ring_flash_attn_func.
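In case it helps, here is a minimal sketch of how ring_flash_attn_func could be dropped in for flash_attn_func; it assumes the signature mirrors flash_attn_func and that q/k/v are already sharded along the sequence dimension, which is how I understand the API rather than an official example.

import torch
import torch.distributed as dist
from ring_flash_attn import ring_flash_attn_func

# launch with e.g.: torchrun --nproc_per_node 4 this_script.py
dist.init_process_group("nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank)

# each rank holds its own shard of the sequence: local_seqlen = total_seqlen / world_size
batch, local_seqlen, num_heads, head_dim = 1, 1024, 8, 64
q = torch.randn(batch, local_seqlen, num_heads, head_dim, dtype=torch.bfloat16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# K/V blocks travel around the ring while each rank attends for its local queries;
# the output has the same shape as the local q shard
out = ring_flash_attn_func(q, k, v, causal=True)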
Were you able to find out the reason for the small numerical errors in the backward pass with ring flash attention?
I found the errors increase as you increase the world size, so it does seem to be related to the fact that flash attention returns 16-bit tensors; even though we accumulate in a 32-bit buffer, that apparently is not enough.
Maybe it would be an easy PR in flash attention to have it return raw fp32, or to do the accumulation upstream?
What is the minimum required flash-attention version?