
ring-flash-attention's Introduction

Ring Flash Attention

This repo implements RingAttention on top of FlashAttention. Currently, it provides:

  • ring_flash_attn_func: ring attention version of flash_attn_func
  • ring_flash_attn_varlen_func: ring attention version of flash_attn_varlen_func
  • zigzag_ring_flash_attn_func: an optimized version of ring_flash_attn_func, see issue#2
  • zigzag_ring_flash_attn_varlen_func: an optimized version of ring_flash_attn_varlen_func
  • stripe_flash_attn_func: stripe attention version of ring_flash_attn_func; the block size is set to 1 so that the flash_attn API can be used.

Note that

  • every function has the *_func, *_kvpacked_func, and *_qkvpacked_func variants implemented (see the usage sketch below).
  • the varlen versions only support passing a single cu_seqlens.
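
A rough usage sketch of the qkvpacked variant follows. It assumes the signature mirrors flash_attn_qkvpacked_func and that each rank holds its own shard of the sequence; the import path and arguments here are assumptions rather than documented API.

import torch
import torch.distributed as dist

from ring_flash_attn import ring_flash_attn_qkvpacked_func  # assumed import path

# launch with: torchrun --nproc_per_node 8 this_script.py
dist.init_process_group("nccl")
rank, world_size = dist.get_rank(), dist.get_world_size()
torch.cuda.set_device(rank)

batch, seqlen, nheads, head_dim = 1, 8192, 8, 64
# each rank holds a 1/world_size slice of the full sequence, packed as (batch, local_seq, 3, heads, dim)
local_qkv = torch.randn(
    batch, seqlen // world_size, 3, nheads, head_dim,
    device="cuda", dtype=torch.bfloat16, requires_grad=True,
)
out = ring_flash_attn_qkvpacked_func(local_qkv, causal=True)
out.sum().backward()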

The main idea is to use the softmax_lse output from the flash attention kernels to combine the partial results of each block.
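
Concretely, two partial attention results for the same queries, each with its own log-sum-exp (lse), can be merged into the exact combined result. Below is a minimal sketch of that combining rule (the repo's own helper, _update_out_and_lse, is quoted in the issues further down); shapes are assumed to already be broadcast-compatible.

import torch

def merge_blocks(out1, lse1, out2, lse2):
    # new_lse = log(exp(lse1) + exp(lse2)), computed stably
    new_lse = torch.logaddexp(lse1, lse2)
    # rescale each partial output by its block's share of the total softmax mass
    out = torch.exp(lse1 - new_lse) * out1 + torch.exp(lse2 - new_lse) * out2
    return out, new_lse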

The current performance on 8xH800 and 8xA100 is (benchmark/benchmark_qkvpacked_func.py):

                     GPU     theoretic flash_attn   ring_attn       zigzag_ring     stripe_attn
fwd only (iter/sec)  8xH800  2418.4 / 8 = 302.3     208.0 (68.8%)   283.0 (93.6%)   259.6 (85.9%)
fwd + bwd (iter/sec) 8xH800  705.2 / 8 = 88.2       54.3 (61.5%)    75.7 (85.9%)    76.9 (87.2%)
fwd only (iter/sec)  8xA100  1545.9 / 8 = 193.2     124.4 (64.3%)   179.0 (92.7%)   163.9 (84.8%)
fwd + bwd (iter/sec) 8xA100  470.6 / 8 = 58.8       33.3 (56.6%)    49.5 (84.1%)    45.9 (78.1%)

Note that

  • when running the benchmark with 8 GPUs, the flash attn baseline performs only 1/8 of the computation of ring attention.
  • NVLink between GPUs is required for high performance.
  • the varlen versions are slow at the moment; please use the non-varlen versions if possible.

Limits

There are some arithmetic errors in the current implementation. The likely reason is that flash attention returns bf16 values for each block, so we cannot accumulate them with the original fp32 values.

Also, because we need to keep an extra fp32 buffer during computation, the memory usage is higher than the theoretical limit.
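
A tiny illustration (not repo code) of the precision issue: if each block's contribution is rounded to bf16 before being added, the accumulated result drifts further from a pure fp32 sum.

import torch

x = torch.randn(8, 1024, dtype=torch.float32)            # 8 "blocks" of partial results
ref = x.sum(dim=0)                                        # accumulate everything in fp32
acc = torch.zeros(1024, dtype=torch.float32)
for block in x:
    acc += block.to(torch.bfloat16).to(torch.float32)     # each block rounded to bf16 first
print((acc - ref).abs().max())                            # noticeably larger than fp32-only rounding error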

TODOs

  • Implement ring_flash_attn_varlen_qkvpacked_func
  • Implement zigzag_ring_flash_attn_qkvpacked_func issue#2
  • Implement stripe_flash_attn_qkvpacked_func
  • Implement zigzag_ring_flash_attn_varlen_qkvpacked_func
  • Implement *_kvpacked_func and *_func variants for all APIs
  • Optimize *_varlen_func
  • Try to upstream to flash attention.

Test

torchrun --nproc_per_node 8 test/test_ring_flash_attn_func.py
torchrun --nproc_per_node 8 test/test_ring_flash_attn_varlen_func.py
torchrun --nproc_per_node 8 test/test_zigzag_ring_flash_attn_func.py
torchrun --nproc_per_node 8 test/test_zigzag_ring_flash_attn_varlen_func.py
torchrun --nproc_per_node 8 test/test_stripe_flash_attn_func.py

Benchmark

torchrun --nproc_per_node 8 benchmark/benchmark_qkvpacked_func.py
torchrun --nproc_per_node 8 benchmark/benchmark_varlen_qkvpacked_func.py

Known Limits

  • dropout is not supported at the moment, because it is hard to save all the rng_states.
  • window_size is not supported, because it would be really tricky to implement a varlen version with window_size.

ring-flash-attention's People

Contributors

reyoung, yuxin-cv, zhuzilin


ring-flash-attention's Issues

Thanks for your great work and here are my test results!

I ran the test script on my single-node machine and observed the memory cost of each attention:

    benchmark_forward(flash_attn_qkvpacked_func) # 2k. gpu memory
    benchmark_forward(ring_flash_attn_qkvpacked_func) # 4k8. gpu memory
    benchmark_forward(ring_flash_attn_qkvpacked_func_v2) # 14k. gpu memory
    benchmark_forward(zigzag_ring_flash_attn_qkvpacked_func) # 5k. gpu memory

And the running time of each attention:

flash_attn_qkvpacked_func 4470.3194011016185 iter/s, 0.22369766235351562 sec
ring_flash_attn_qkvpacked_func 327.3293995259765 iter/s, 3.0550265312194824 sec
ring_flash_attn_qkvpacked_func_v2 310.57613547187344 iter/s, 3.219822406768799 sec
zigzag_ring_flash_attn_qkvpacked_func 340.40177158407965 iter/s, 2.9377050399780273 sec

Is there something wrong?

How is ring attention implemented?

Ring attention is essentially a distributed version of flash attention. FlashAttention-2 maintains the softmax denominator, but when updating out it seems to only rescale by the running max and not by the denominator, in order to reduce computation, right? After Q has been computed against the full ring of KV, you just divide by the global softmax denominator at the end. So can the distributed FA in this ring attention implementation be understood as the v1 version of FA?

Question about updating lse

Thank you for the excellent work.

Could you explain line 19 in the function?

I think it should be new_lse = lse + block_lse
Is there any analytic or numerical reason?

from typing import Tuple

import torch


@torch.jit.script
def _update_out_and_lse(
    out: torch.Tensor,
    lse: torch.Tensor,
    block_out: torch.Tensor,
    block_lse: torch.Tensor,
) -> Tuple[torch.Tensor, torch.Tensor]:
    block_out = block_out.to(torch.float32)
    block_lse = block_lse.transpose(-2, -1).unsqueeze(dim=-1)
    new_lse = lse + torch.log(1 + torch.exp(block_lse - lse))
    out = torch.exp(lse - new_lse) * out + torch.exp(block_lse - new_lse) * block_out
    lse = new_lse
    return out, lse
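
For reference, the line in question is a numerically stable way of computing log(exp(lse) + exp(block_lse)), i.e. a log-add-exp of the two values rather than their sum:

import torch

lse, block_lse = torch.tensor(2.0), torch.tensor(5.0)
stable = lse + torch.log(1 + torch.exp(block_lse - lse))
print(torch.allclose(stable, torch.logaddexp(lse, block_lse)))  # True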

large memory usage

Thanks for sharing this excellent implementation of ring attention.
Here are my test results on 2*A100 (with NVLink). Judging from the results, the memory usage of ring attention (ring_flash_attn_qkvpacked_func) seems to be very large, which is not as expected. Are there any possible problems?

Numerical errors in backward

Were you able to find out the reason for the small numerical errors in backward pass with ring flash attention?

I found that the errors increase as the world size grows, so it does seem related to the fact that flash attention returns 16-bit tensors; even though we accumulate in a 32-bit buffer, that does not seem to be enough.

Maybe it would be an easy PR to flash attention to have it return raw fp32, or to do the accumulation upstream?

ring flash attention with BPT

Hi~ @zhuzilin
I am trying to integrate BPT into ring flash attention: split q, k, v by chunk_size and compute attention over smaller chunks locally.
Following the forward and backward in ring_flash_attn.py, I implemented blockwise_flash_attn_forward and blockwise_flash_attn_backward. The forward currently matches in precision, but the backward has errors. What problems might there be in my backward implementation?
Below is my implementation:

import torch
# imports assumed for this snippet (exact module paths may differ):
from flash_attn.flash_attn_interface import _flash_attn_forward, _flash_attn_backward
from ring_flash_attn.utils import update_out_and_lse


def blockwise_flash_attn_forward(
    q: torch.Tensor,
    k: torch.Tensor,
    v: torch.Tensor,
    q_chunk_size: int,
    k_chunk_size: int,
    softmax_scale,
    dropout_p=0,
    causal=True,
    return_softmax=True
):
    assert q.shape[1] % q_chunk_size == 0
    assert k.shape[1] % k_chunk_size == 0
    
    num_q_chunk = q.shape[1] // q_chunk_size
    num_k_chunk = k.shape[1] // k_chunk_size
    batch,seqlen,head_dim,num_head = q.shape
    
    block_out = torch.empty(q.shape, dtype=torch.float32, device=q.device)
    block_lse = torch.empty((batch,seqlen,head_dim,1), dtype=torch.float32, device=q.device)

    for i in range(num_q_chunk):
        q_i = q[:,i * q_chunk_size: (i + 1) * q_chunk_size]
        out_i = None
        lse_i = None
        
        for j in range(num_k_chunk-1,-1,-1):
            if j > i and causal:
                continue
            
            k_j = k[:,j * k_chunk_size: (j + 1) * k_chunk_size]
            v_j = v[:,j * k_chunk_size: (j + 1) * k_chunk_size]
            
            out_ij, _, _, _, _, lse_ij, _, _ = _flash_attn_forward(
                q_i,
                k_j,
                v_j,
                dropout_p,
                softmax_scale,
                causal=causal and j == i,
                return_softmax=return_softmax and dropout_p > 0
            )
            out_i, lse_i = update_out_and_lse(out_i, lse_i, out_ij, lse_ij)

        block_out[:, i * q_chunk_size: (i + 1) * q_chunk_size] = out_i
        block_lse[:, i * q_chunk_size: (i + 1) * q_chunk_size] = lse_i
        
    return block_out, block_lse.squeeze(dim=-1).transpose(-1,-2)


def blockwise_flash_attn_backward(
    dout,
    q,
    k,
    v,
    out,
    q_chunk_size,
    k_chunk_size,
    softmax_lse,
    dq,
    dk,
    dv,
    softmax_scale,
    dropout_p,
    causal=True,
    rng_state=None
):
    assert q.shape[1] % q_chunk_size == 0
    assert k.shape[1] % k_chunk_size == 0

    num_q_chunk = q.shape[1] // q_chunk_size
    num_k_chunk = k.shape[1] // k_chunk_size

    temp_dq_buffer = torch.empty(q[:,:q_chunk_size].shape, dtype=q.dtype, device=q.device)
    temp_dk_buffer = torch.empty(k[:,:k_chunk_size].shape, dtype=k.dtype, device=k.device)
    temp_dv_buffer = torch.empty(v[:,:k_chunk_size].shape, dtype=v.dtype, device=v.device)
    
    
    for i in range(num_q_chunk):
        q_i = q[:,i * q_chunk_size: (i + 1) * q_chunk_size]
        dout_i = dout[:,i * q_chunk_size: (i + 1) * q_chunk_size]
        out_i = out[:,i * q_chunk_size: (i + 1) * q_chunk_size]
        softmax_lse_i = softmax_lse[:,:,i * q_chunk_size: (i + 1) * q_chunk_size]
        q_i = q_i.contiguous()
        dout_i = dout_i.contiguous()
        out_i = out_i.contiguous()
        softmax_lse_i = softmax_lse_i.contiguous()

        for j in range(num_k_chunk):
            k_j = k[:,j * k_chunk_size: (j + 1) * k_chunk_size]
            v_j = v[:,j * k_chunk_size: (j + 1) * k_chunk_size]
            k_j = k_j.contiguous()
            v_j = v_j.contiguous()

            if j > i and causal:
                continue

            _flash_attn_backward(
                dout_i,
                q_i,
                k_j,
                v_j,
                out_i,
                softmax_lse_i,
                temp_dq_buffer,
                temp_dk_buffer,
                temp_dv_buffer,
                dropout_p,
                softmax_scale,
                causal = causal and j == i,
                rng_state=rng_state,
            )
            
            # update dq dk dv
            dq[:,i * q_chunk_size: (i + 1) * q_chunk_size] += temp_dq_buffer
            dk[:,j * k_chunk_size: (j + 1) * k_chunk_size] += temp_dk_buffer
            dv[:,j * k_chunk_size: (j + 1) * k_chunk_size] += temp_dv_buffer

These replace _flash_attn_forward inside ring_flash_attn_forward and _flash_attn_backward inside ring_flash_attn_backward, respectively.

Below are my test results:

##############################
# forward:
##############################
out: max 2.896484375, mean 0.0203094482421875
lse: max 10.417832374572754, mean 9.204237937927246
out diff:
[0] max 0.00048828125, mean 8.881092071533203e-06
[1] max 0.0001220703125, mean 7.450580596923828e-06
[2] max 0.0001220703125, mean 5.9604644775390625e-06
[3] max 6.103515625e-05, mean 5.066394805908203e-06
[4] max 6.103515625e-05, mean 4.5299530029296875e-06
[5] max 6.103515625e-05, mean 4.112720489501953e-06
[6] max 6.103515625e-05, mean 3.814697265625e-06
[7] max 6.103515625e-05, mean 3.516674041748047e-06
lse diff:
[0] max 9.5367431640625e-07, mean 1.645181413323371e-07
[1] max 9.5367431640625e-07, mean 2.641230878452916e-07
[2] max 1.9073486328125e-06, mean 3.0044466825529526e-07
[3] max 1.9073486328125e-06, mean 3.3890827921823075e-07
[4] max 1.9073486328125e-06, mean 3.8137659430503845e-07
[5] max 1.9073486328125e-06, mean 4.0913002408160537e-07
[6] max 1.9073486328125e-06, mean 4.272908142866072e-07
[7] max 1.9073486328125e-06, mean 4.6798959374427795e-07
##############################
# backward:
##############################
load_dq:
[0] max 2.783203125, mean 0.052520751953125
[1] max 0.3310546875, mean 0.02398681640625
[2] max 0.2083740234375, mean 0.0184478759765625
[3] max 0.1162109375, mean 0.0155792236328125
[4] max 0.13330078125, mean 0.01374053955078125
[5] max 0.1204833984375, mean 0.01241302490234375
[6] max 0.11260986328125, mean 0.0114288330078125
[7] max 0.0775146484375, mean 0.01064300537109375
dq diff:
[0] max 0.005859375, mean 7.49826431274414e-05
[1] max 0.186279296875, mean 0.01239776611328125
[2] max 0.1973876953125, mean 0.01953125
[3] max 0.235107421875, mean 0.0253143310546875
[4] max 0.30615234375, mean 0.0301361083984375
[5] max 0.52392578125, mean 0.03436279296875
[6] max 0.56689453125, mean 0.038177490234375
[7] max 0.3955078125, mean 0.041748046875
load_dk:
[0] max 2.654296875, mean 0.05340576171875
[1] max 0.256591796875, mean 0.021697998046875
[2] max 0.169921875, mean 0.01535797119140625
[3] max 0.13330078125, mean 0.0116729736328125
[4] max 0.09124755859375, mean 0.0090484619140625
[5] max 0.1158447265625, mean 0.006908416748046875
[6] max 0.050384521484375, mean 0.00492095947265625
[7] max 0.03936767578125, mean 0.002498626708984375
dk diff:
[0] max 0.253173828125, mean 0.03192138671875
[1] max 0.16845703125, mean 0.0232696533203125
[2] max 0.130126953125, mean 0.017364501953125
[3] max 0.1097412109375, mean 0.012786865234375
[4] max 0.10797119140625, mean 0.00893402099609375
[5] max 0.049530029296875, mean 0.005580902099609375
[6] max 0.039337158203125, mean 0.002498626708984375
[7] max 1.52587890625e-05, mean 3.5762786865234375e-07
load_dv:
[0] max 5.89453125, mean 0.05450439453125
[1] max 0.1951904296875, mean 0.021484375
[2] max 0.11883544921875, mean 0.01525115966796875
[3] max 0.10003662109375, mean 0.01158905029296875
[4] max 0.07550048828125, mean 0.00901031494140625
[5] max 0.06658935546875, mean 0.006816864013671875
[6] max 0.041015625, mean 0.00492095947265625
[7] max 0.041961669921875, mean 0.002475738525390625
dv diff:
[0] max 0.3232421875, mean 0.042572021484375
[1] max 0.21240234375, mean 0.03094482421875
[2] max 0.1527099609375, mean 0.0223236083984375
[3] max 0.1075439453125, mean 0.015625
[4] max 0.08245849609375, mean 0.010223388671875
[5] max 0.0447998046875, mean 0.005950927734375
[6] max 0.0419921875, mean 0.002475738525390625
[7] max 3.0517578125e-05, mean 3.5762786865234375e-07

test on 8*A800

I tested on the 8*A800 machine, and below are my test results.

flash_attn_qkvpacked_func : 464.177 iter/s, 0.215 sec
ring_flash_attn_qkvpacked_func: 28.712 iter/s, 3.483 sec
zigzag_ring_flash_attn_qkvpacked_func: 36.878 iter/s, 2.712 sec
stripe_flash_attn_qkvpacked_func: 37.264 iter/s, 2.684 sec

Compared to the A100 performance in the README, zigzag only achieves 63.5% of the original flash_attn performance. Is this result reasonable?

[Feature Request] Balancing computation with zigzag blocking

Currently the implementation splits the input sequence into n blocks, e.g. 4 GPUs will split it into:

b0 | b1 | b2 | b3

However, this results in uneven computation: due to the causal attention mask, the GPU that holds b3 does roughly 4 times more work than the GPU that holds b0.

If we instead split the input sequence into 2n blocks, e.g. 4 GPUs will split it into:

b0,b7 | b1,b6 | b2,b5 | b3,b4

then all GPUs do the same amount of computation, and theoretically the latency should decrease by nearly half (see the sketch below).
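
A minimal sketch of the zigzag sharding described above (zigzag_split is a hypothetical helper name, not necessarily the repo's API): split the sequence into 2 * world_size chunks and give rank r chunks r and 2 * world_size - 1 - r.

import torch

def zigzag_split(x: torch.Tensor, world_size: int, rank: int, dim: int = 1) -> torch.Tensor:
    # pair the r-th chunk with its mirror so every rank gets the same causal-attention workload
    chunks = x.chunk(2 * world_size, dim=dim)
    return torch.cat([chunks[rank], chunks[2 * world_size - 1 - rank]], dim=dim)

# e.g. with world_size = 4, rank 0 holds (b0, b7) and rank 3 holds (b3, b4)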

Ring attention performance on 4x A100 is not great

torchrun --nproc_per_node 4 benchmark/benchmark_varlen_qkvpacked_func.py
# flash_attn_varlen_qkvpacked_func
618.3720160243046 iter/s, 0.16171495056152344 sec
# ring_flash_attn_varlen_qkvpacked_func
80.5184792465389 iter/s, 1.241950927734375 sec
# zigzag_ring_flash_attn_varlen_qkvpacked_func
63.748601413892935 iter/s, 1.568661865234375 sec
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.182.03   Driver Version: 470.182.03   CUDA Version: 12.3     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A100-SXM...  On   | 00000000:03:00.0 Off |                    0 |
| N/A   18C    P0    49W / 400W |      3MiB / 40536MiB |      0%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA A100-SXM...  On   | 00000000:05:00.0 Off |                    0 |
| N/A   17C    P0    49W / 400W |      3MiB / 40536MiB |      0%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA A100-SXM...  On   | 00000000:15:00.0 Off |                    0 |
| N/A   15C    P0    49W / 400W |      3MiB / 40536MiB |      0%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA A100-SXM...  On   | 00000000:1E:00.0 Off |                    0 |
| N/A   17C    P0    48W / 400W |      3MiB / 40536MiB |      0%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+

Multi-GPU qkv dimension problem

Hello, when fine-tuning a qwen model with multiple GPUs and multiple batches, the sequence is split across different GPUs. Because of the attention mask, the qkv dimensions on the last GPU do not match those on the other GPUs, which eventually causes an NCCL communication timeout. How should I modify the code to handle this case?

ring attention with varlen

Hi, Zilin, I'm curious if there is a design document or a conceptual design available for the solution that supports variable length with ring attention?

Does ring-attn not support dropout?

In the backward function of ring-attn, rng_state does not use the value from the forward function; None is passed in directly.
Does this indicate that ring-attn does not support dropout?

Precision issues

There are some arithmetic errors in the current implementation. The reason for them is probably that flash attention returns bf16 values for each block, so we cannot accumulate them with the original fp32 values.

If bf16 precision is used instead of fp32, does the issue of accumulating the values with the original fp32 ones no longer exist?

Question about TP and the final aggregation of chunked operations

Hello, I am trying to integrate your code into Megatron-LM. I enabled tensor parallelism, and the code also chunks q, k, v (local_q). How should the results be aggregated in the end?

Bugs when using zigzag_ring_flash_attn: RuntimeError: Number of requests do not match number of collectives


Hello, I got the error above when using the zigzag_ring_flash_attn mode of EasyContext.
All of my data is grouped by length to 32768+1 tokens (following https://github.com/jzhang38/EasyContext/issues/31#issue-2308064466).

It runs fine in data-parallel mode, but sequence parallelism raises this error.

code:

def main(args):
    if args.output_dir:
        os.makedirs(args.output_dir, exist_ok=True)
    if args.wandb:
        import wandb

        wandb.login()
    set_seed(args.seed)

    timeout = InitProcessGroupKwargs(timeout=timedelta(seconds=1_000_000))

    accelerator = Accelerator(
        gradient_accumulation_steps=args.gradient_accumulate_every,
        mixed_precision="bf16",
        log_with="wandb" if args.wandb else None,
        kwargs_handlers=[timeout],
        # fsdp_plugin=fsdp_plugin,
    )
    accelerator.init_trackers(project_name=args.wandb, init_kwargs={"wandb":{"name":args.output_dir.split("/")[-1]}})
    accelerator.print(f"Total GPUS: {accelerator.num_processes}")
    
    model = AutoModelForCausalLM.from_pretrained(
        args.model,
        device_map=accelerator.device,
        torch_dtype=torch.bfloat16,
        rope_theta=args.rope_theta,
        _attn_implementation="flash_attention_2",
    )
    
#     tokenizer = AutoTokenizer.from_pretrained(
#         args.model,
#         trust_remote_code=True,
#         # llama does not support the fast tokenizer
#     )
    try:
        train_dataset = load_dataset(args.dataset)
    except:
        train_dataset = load_from_disk(args.dataset)
    if isinstance(train_dataset, DatasetDict):
        train_dataset = train_dataset["train"]
#     train_dataset = QwenSFTDataset(args.dataset, tokenizer, args)

    assert isinstance(
        model, (transformers.LlamaForCausalLM, transformers.MistralForCausalLM)
    ), "Only support llama and mistral model"
    model_type = (
        "llama" if isinstance(model, transformers.LlamaForCausalLM) else "mistral"
    )
    apply_seq_parallel_monkey_patch(args.parallel_mode, model_type)

    if "input_ids" not in train_dataset.column_names:
        raise RuntimeError("Dataset must include an `input_ids` feature")
    # remove everything that is not input_ids
    to_remove = [col for col in train_dataset.column_names if col != "input_ids"]
    train_dataset = train_dataset.remove_columns(to_remove)
    train_dataset = train_dataset.shuffle(seed=args.seed)
    print("Dataset Size:", len(train_dataset))
    train_loader = DataLoader(
        train_dataset,
        collate_fn=default_data_collator,
        shuffle=True,
        batch_size=args.batch_size,
    )
    if args.learning_rate != 2e-5:
        accelerator.print(f"Warning: You also need to modify accelerate_configs/zero3_offload.json to change the learning rate")
    optim = DummyOptim(model.parameters(), lr=args.learning_rate)
    scheduler = DummyScheduler(
        optim,
        num_training_steps=args.max_train_steps,
        total_num_steps=args.max_train_steps,
    )
    model, optim, scheduler = accelerator.prepare(model, optim, scheduler)
    train_loader = prepare_dataloader(args.parallel_mode, train_loader, accelerator)
    model.gradient_checkpointing_enable()

    accelerator.register_for_checkpointing(scheduler)

    accelerator.print(f"Max train steps: {args.max_train_steps}")
    progress_bar = tqdm(
        range(args.max_train_steps), disable=not accelerator.is_local_main_process
    )
    completed_steps = 0

    model.train()
    loss_func = CrossEntropyLoss(inplace_backward=True)
    for step, batch in enumerate(train_loader):
        input_ids = batch["input_ids"][..., : args.seq_length + 1][..., :-1]
        target_ids = batch["input_ids"][..., : args.seq_length + 1][..., 1:]
        position_ids = (
            torch.arange(args.seq_length).unsqueeze(0).expand(input_ids.shape[0], -1)
        )
        # shard the input_ids according to the world size and rank according to zig zag attention
        # print(input_ids.shape, position_ids.shape) # these values must be equal
        
        prepared = prepare_seq_parallel_inputs(
            args.parallel_mode,
            input_ids,
            position_ids,
            target_ids,
            accelerator.process_index,
            accelerator.num_processes,
            accelerator.device,
        )
        local_input_ids = prepared["local_input_ids"]
        local_position_ids = prepared["local_position_ids"]
        local_target_ids = prepared["local_target_ids"]

        loss_log = None
        with accelerator.accumulate(model):
            logits = model(
                local_input_ids,
                position_ids=local_position_ids,
            ).logits
            loss = loss_func(
                logits.reshape(-1, logits.shape[-1]), local_target_ids.reshape(-1)
            )
            accelerator.backward(loss)

            if accelerator.sync_gradients:
                # pay attention here. When any seq parallel algo is turned on. This technically only log the very first chunk's loss
                # and what is the first chunk really depends on how do you shard the sequence
                # for zig zag attention, the first chunk contains the left most and rightmost tokens
                # so you cannot compare the (logged) loss of dist attention and zigzag ring attention.
                # loss_log = {"loss": loss.item(), "ppl": math.exp(loss.item())}

                # we now try gathered loss to verify if ring attention and dist flash attention produce the same loss
                # this may slow down the training
                gathered_loss = accelerator.reduce(loss.clone().detach(), "mean")
                loss_log = {
                    "loss": gathered_loss.item(),
                    "ppl": math.exp(gathered_loss.item()),
                }
                accelerator.log(loss_log, step=completed_steps)

            optim.step()
            scheduler.step()
            optim.zero_grad()

        if accelerator.sync_gradients:
            progress_bar.update(1)
            if loss_log is not None:
                progress_bar.set_postfix(loss_log)
            completed_steps += 1

        if completed_steps >= args.max_train_steps:
            break

    accelerator.print(f"Training Finished")
    accelerator.end_training()

    if args.output_dir is not None:
        accelerator.print(f"Saving model to {args.output_dir}")

        accelerator.wait_for_everyone()

        state_dict = accelerator.get_state_dict(model)

        accelerator.unwrap_model(model).save_pretrained(
            f"{args.output_dir}",
            is_main_process=accelerator.is_main_process,
            save_function=accelerator.save,
            state_dict=state_dict,
        )

        accelerator.print(f"Saving Finished")

Multi-node training speed problem

I tried training on multiple nodes with multiple GPUs and found that it is much slower than on a single node. In the same training environment it takes three times as long as DeepSpeed Ulysses, while there is no such problem on a single node. What could be causing this?
