
olmo's Issues

Try torch 2.0

It's supposed to be faster, and at this point, it's the only way we can get the fast interconnect working on LUMI.

Mosaic tells us we need to apply this to composer to make it work: dskhudia/composer@4d55bdb

We should expect a speed-up too.

Integrate down-stream evaluation code

🚀 The feature, motivation and pitch

No response

Alternatives

No response

Additional context

Make additional layer and adapter for running code with existing down-stream evaluation tools (e.g., HELM, Catwalk).

Add `generate` method to our model implementation

🚀 The feature, motivation and pitch

We need a generate method to run evaluation. Minimum requirement:

  • top_k / top_p support
  • temperature control
  • return log probabilities.
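As a rough sketch of what that sampling step could look like (the function name and exact interface are placeholders, not the final API):

import torch
import torch.nn.functional as F

def sample_next_token(logits, temperature=1.0, top_k=0, top_p=1.0):
    # logits: (batch_size, vocab_size) for the last position.
    logits = logits / max(temperature, 1e-6)
    if top_k > 0:
        # Mask everything below the k-th largest logit.
        kth = torch.topk(logits, top_k, dim=-1).values[..., -1, None]
        logits = logits.masked_fill(logits < kth, float("-inf"))
    if top_p < 1.0:
        sorted_logits, sorted_idx = torch.sort(logits, descending=True, dim=-1)
        probs = F.softmax(sorted_logits, dim=-1)
        # Drop tokens outside the nucleus, always keeping the most likely token.
        mask = probs.cumsum(dim=-1) - probs > top_p
        sorted_logits = sorted_logits.masked_fill(mask, float("-inf"))
        logits = logits.scatter(-1, sorted_idx, sorted_logits)
    log_probs = F.log_softmax(logits, dim=-1)
    next_token = torch.multinomial(log_probs.exp(), num_samples=1)
    return next_token, log_probs.gather(-1, next_token)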

Llama's generate method is fairly straightforward, but we can't use it because it has the wrong license.

Alternatives

  • Converting the model to HuggingFace and using their generate method. This will eventually need to be done, but since we haven't fully finalized the architecture, it's too soon.
  • Finding an Apache 2.0-licensed generation method.

Additional context

No response

Investigate RMSNorm as an alternative to LayerNorm

There's a PyTorch implementation here: https://github.com/bzhangGo/rmsnorm/blob/master/rmsnorm_torch.py

The problem we have with LayerNorm (or at least PyTorch's built-in implementation of LayerNorm) is that torch's autocast behavior is hardcoded to upcast inputs to the function torch.layer_norm, and as a result using low precision LayerNorm requires some hacking that fails to work when using torch.compile(). Low precision LN gives a huge speedup with non-compiled models, so we're optimistic that getting this to work with compiled models would provide a similar speedup.

Switching to a different norm implementation could sidestep this issue, and RMSNorm is a simpler operation so we might get an even bigger speedup.
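For reference, a minimal sketch of RMSNorm along the lines of the linked implementation (illustrative, not the exact linked code). Since it never calls torch.layer_norm, autocast has no special-cased upcast for it:

import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, d_model: int, eps: float = 1e-8):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize by the root-mean-square of the activations (no mean
        # subtraction, no bias), then apply a learned scale.
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).sqrt()
        return self.weight * (x / rms)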

Try adding intermediate layer losses

🚀 The feature, motivation and pitch

E.g., we add an LM loss at 1/2, 1/4, and 1/8 of the model's depth.

To avoid the degenerate scenario where, for example, the first half of the model does well but the 2nd half just becomes a bunch of ~identity layers, we could have a separate LM head for each loss, i.e. at each layer in the model where we apply an LM loss. So we'd have to untie the embedding matrix from the LM head.
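A rough sketch of how the combined loss could be computed (names, tap points, and weights are all placeholders):

import torch.nn.functional as F

def combined_lm_loss(tapped_hiddens, lm_heads, input_ids, loss_weights):
    # tapped_hiddens: activations taken at e.g. 1/8, 1/4, 1/2, and full depth.
    # lm_heads: one untied LM head (Linear to vocab size) per tap point.
    total = 0.0
    for hidden, head, weight in zip(tapped_hiddens, lm_heads, loss_weights):
        logits = head(hidden)  # (B, T, vocab_size)
        # Standard next-token prediction: shift logits/labels by one position.
        loss = F.cross_entropy(
            logits[:, :-1].flatten(0, 1), input_ids[:, 1:].flatten()
        )
        total = total + weight * loss
    return total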

Alternatives

No response

Additional context

No response

Log into some online logging service

This is an experiment, but probably worthwhile. With 1000 nodes, it gets very difficult to see which node did what and when, especially when they fail, if they all log to stdout, which Slurm stuffs into a single log file. Papertrail is basically running syslog-as-a-service: we can send log messages there, then search and filter them in their UI. To do this, we need to configure our code to write logs to their syslog service.
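A minimal sketch of what that could look like with the standard library's SysLogHandler (the host and port are placeholders for the per-account endpoint Papertrail assigns):

import logging
import logging.handlers
import socket

# Placeholder endpoint; substitute the host/port from the Papertrail account.
handler = logging.handlers.SysLogHandler(
    address=("logsN.papertrailapp.com", 12345),
    socktype=socket.SOCK_DGRAM,  # plain UDP syslog; TLS forwarding needs more setup
)
# Prefix messages with the node's hostname so we can filter by node in the UI.
handler.setFormatter(logging.Formatter(f"{socket.gethostname()} olmo: %(levelname)s %(message)s"))
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)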

Add (decoupled) LION optimizer

Mosaic has been preferring this over AdamW and it uses less memory since it only keeps track of momentum. We can find an implementation in this PR. Eventually this will be added to composer, but in the meantime we can just copy it over.
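For reference, a sketch of the core Lion update with decoupled weight decay (this shows the update rule, not the Mosaic PR's code); note it stores only a single momentum buffer per parameter:

import torch

@torch.no_grad()
def lion_step(param, grad, momentum, lr=1e-4, betas=(0.9, 0.99), weight_decay=0.0):
    beta1, beta2 = betas
    # Decoupled weight decay, applied directly to the weights.
    param.mul_(1 - lr * weight_decay)
    # The update direction is just the sign of an interpolation of momentum and gradient.
    update = momentum.mul(beta1).add(grad, alpha=1 - beta1).sign_()
    param.add_(update, alpha=-lr)
    # Momentum is an exponential moving average of gradients with the second beta.
    momentum.mul_(beta2).add_(grad, alpha=1 - beta2)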

Sequential Olmo block should not have shared layernorm

๐Ÿ› Describe the bug

Ideally, the LayerNorm layers in a sequential block should be separate for the attention and FFN modules. The good news is that we haven't seen much of a difference because of this so far.

        q, k, v = self.att_proj(self.norm(x)).split(self.fused_dims, dim=-1)

        # Get attention scores.
        att, cache = self.attention(q, k, v, attention_bias, layer_past=layer_past, use_cache=use_cache)

        # Add attention scores.
        # shape: (B, T, C)
        x = x + self.dropout(att)

        # Add feed-forward projection.
        # shape: (batch_size, seq_len, d_model)
        x = x + self.dropout(self.ff_out(self.act(self.ff_proj(self.norm(x)))))

Ideally, `self.norm` in the last line should be a separate instance (e.g. `self.norm2`).
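A minimal standalone sketch of the proposed fix (names and sub-modules are illustrative, not our actual block): the attention path and the feed-forward path each get their own LayerNorm rather than sharing one.

import torch
import torch.nn as nn

class SequentialBlockSketch(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        # Two separate norms instead of a single shared self.norm.
        self.attn_norm = nn.LayerNorm(d_model)
        self.ff_norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.attn_norm(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.ff(self.ff_norm(x))
        return x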

Versions

No response

Add eval loop to training script

This should be pretty easy: just add a configuration field for eval batch size and what not, then initialize an eval dataloader in the same way that we initialize the train data loader and pass it to the Trainer.
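Roughly (the config field and trainer wiring below are assumptions about how this might look, mirroring whatever the train dataloader does today):

from torch.utils.data import DataLoader, TensorDataset
import torch

# Stand-in for the real eval split, built the same way as the train dataset.
eval_dataset = TensorDataset(torch.randint(0, 50257, (512, 1024)))
eval_loader = DataLoader(
    eval_dataset,
    batch_size=8,      # would come from a new `eval_batch_size` config field
    shuffle=False,
    drop_last=False,
    pin_memory=True,
)
# trainer = Trainer(..., eval_dataloader=eval_loader)  # hypothetical wiring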

Does using `Dropout` layers, even if the probability is 0, have a performance penalty?

โ“ The question

Ideally there should be no performance penalty for dropout layers when the dropout probability is 0. But if there is, we should bypass dropout when p=0.0.
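The bypass could be as simple as a thin wrapper (name is illustrative):

import torch
import torch.nn as nn

class MaybeDropout(nn.Dropout):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Skip the dropout call entirely when the probability is zero, so a
        # "disabled" dropout layer cannot cost anything at runtime.
        if self.p == 0.0:
            return x
        return super().forward(x)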

I'm currently testing this here: https://wandb.ai/ai2-llm/dropout-benchmarks

There will be 4 runs:

  1. 1.2b-bf16-no-dropout: uses a patched branch that bypasses all calls to dropout (except inside of the scaled_dot_product_attention function which we have no control over). This model is compiled using the default settings.
  2. 1.2b-bf16-zero-dropout: uses the usual implementation without any code changes, where we still call the Dropout modules even though the dropout probability is set to 0. This model is compiled using the default settings.
  3. 1.2b-bf16-no-compile-no-dropout: same as 1.2b-bf16-no-dropout except this model is NOT compiled.
  4. 1.2b-bf16-no-compile-zero-dropout: same as 1.2b-bf16-zero-dropout except this model is NOT compiled.

Ensuring Data Order Tracking for Reproducibility

🚀 The feature, motivation and pitch

Yesterday we spoke about where responsibility for data order lives between the llm-model and llm-data workstreams. I thought it might be good to start an issue here where we can figure out who should ensure, and how, that future work can reproduce our exact pretraining data order.

Use cases:

  • It should be clear exactly what tokens a given checkpoint has trained on, so we can ask whether a model has seen a specific thing and, if so, how long ago it saw it.
  • It should be possible to recreate the same order of documents using a different tokenizer, to allow other research efforts to compare to ours.

Proposed features:

  • Track which document IDs are in each concatenated example and which examples are in each batch
  • Have dataloaders that queue up data across devices and nodes such that the sequence of document IDs is always the same, even if the boundaries between batches change

Proposed method:

  • When using j * k dataloaders, dl_i, across j nodes and k devices, each dataloader should sample from the n documents of the pretraining corpus, doc_i, with a stride of j * k
  • For example, if j=1 and k=2, dl_0[0] == doc_0 and dl_1[0] == doc_1, while dl_0[1] == doc_2 and dl_1[1] == doc_3
  • Finally, during training, the model will log the doc ids and associated spans in each concatenated example, and examples will be logged per batch. A rough example might be {batch_0 : [ [ (doc_0, (start_tok_idx, end_tok_idx)), ...], [(doc_42, (start_tok_idx, end_tok_idx)), ...]]}
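A tiny sketch of the strided assignment proposed above (purely illustrative):

def docs_for_loader(loader_rank: int, world_size: int, num_steps: int):
    # With j nodes * k devices = world_size dataloaders, loader dl_i reads
    # documents i, i + world_size, i + 2 * world_size, ..., so the global
    # document order stays fixed no matter how batches are cut.
    return [loader_rank + step * world_size for step in range(num_steps)]

# Matches the j=1, k=2 example above:
assert docs_for_loader(0, 2, 2) == [0, 2]  # dl_0 -> doc_0, doc_2
assert docs_for_loader(1, 2, 2) == [1, 3]  # dl_1 -> doc_1, doc_3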

Let me know what I can do to help support this and coordinate responsibility between llm-model and llm-data.

Alternatives

A simpler, though more limited, alternative is to just record the tokens that are trained on, without any further changes.

While this supports the ability to see what a given checkpoint has seen and when it has seen it, it may make it difficult to recover batch and document boundaries or to reproduce this data order with a different tokenizer.

Also, if we don't ensure that data order is invariant to the number of nodes and devices, we may accidentally produce different training orders across different runs. Additionally, it may be difficult for people using our codebase to reproduce our data order if they don't have the same number of nodes and devices.

Additional context

No response

Add a 7B config

We promised a 7B model. Also, we need an appropriately sized model to test multi-node training at smaller scales.

Multi-Query attention from PaLM

🚀 The feature, motivation and pitch

Iz and I were talking about this today; it can go under nice-to-have.

This is from the PaLM paper:

"The standard Transformer formulation uses k attention heads, where the input vector for each timestep is linearly projected into โ€œqueryโ€, โ€œkeyโ€, and โ€œvalueโ€ tensors of shape [k, h], where h is the attention head size. Here, the key/value projections are shared for each head, i.e. โ€œkeyโ€ and โ€œvalueโ€ are projected to [1, h], but โ€œqueryโ€ is still projected to shape [k, h]. We have found that this has a neutral effect on model quality and training speed (Shazeer, 2019), but results in a significant cost savings at autoregressive decoding time. This is because standard multi-headed attention has low efficiency on accelerator hardware during auto-regressive decoding, because the key/value tensors are not shared between examples, and only a single token is decoded at a time."

Alternatives

No response

Additional context

No response

Collect 70B S2 tokens

Exact spec still WIP, but TODOs are basically:

  1. Athena query to get Titles & Abstracts from S2AG. Form a JSON blob per document of the form (see the sketch after this list):
{"text": "...", "paper_id": <identifier>}
  • To concatenate, " ".join([title, abstract]) should be sufficient.
  • Double-check whether structured abstracts preserve whitespace.
  2. Athena query to get S2ORC-OA papers. Form a JSON blob per document of the same form:
{"text": "...", "paper_id": <identifier>}
  • To concatenate, also whitespace-join to linearize structured content.
  • Keep everything for now, including tables, bibliographies, etc.
  3. Build a blocklist of papers. For now, this should just be a single file mapping paper_ids to some note or reason for removal. To start, it should contain documents that are part of the test set for Catwalk evaluation, especially PubMed/arXiv abstract generation.
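A small sketch of forming the per-document blob for the title+abstract pass (field handling is illustrative):

import json

def s2ag_blob(paper_id: str, title: str, abstract: str) -> str:
    # Whitespace-join title and abstract into a single "text" field and keep
    # the paper id so documents can be traced back (and blocklisted) later.
    text = " ".join(part for part in (title, abstract) if part)
    return json.dumps({"text": text, "paper_id": paper_id})

print(s2ag_blob("<identifier>", "A Title", "An abstract."))
# {"text": "A Title An abstract.", "paper_id": "<identifier>"}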

Integrate latest throughput improvements from Mosaic

  • Abhi: "Using better FSDP configuration (BF16 all-gather, limiting all-gathers, and a new non-reentrant version of activation checkpointing)."
    I think this means setting mixed_precision=PURE, limit_all_gathers=true, and activation_checkpointing_reentrant=false in the fsdp_config (see the sketch after this list). Added in #61
  • Using Triton-based attention rather than HazyResearch Flash. We're sort of doing this already by using PyTorch's new built-in implementation.
  • Low precision LayerNorm. Implemented in #61
  • Use a fused version of CELoss. I think we're already doing this?
  • Increase microbatch size as high as possible.
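For the first bullet, the settings would look something like this in the fsdp_config (double-check the key names against the Composer version we're on):

fsdp_config = {
    "mixed_precision": "PURE",                    # BF16 all-gathers
    "limit_all_gathers": True,                    # throttle all-gather prefetching
    "activation_checkpointing_reentrant": False,  # non-reentrant activation checkpointing
}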

Try bf16 on AMD hardware

Turns out we never tried it. If this fails, we need to talk to AMD ASAP. We cannot get to 70B without this.

Rename "DOLMA" to "OLMo"

We should either wait until #61 merges, or do this as part of that PR.

We need to grep through and change all mentions of "DOLMA" / "Dolma" / "dolma" and also rename the Python module itself.

Basically:

# Rename Python module.
mv dolma olmo
# Find and replace all mentions using `fd` and `sd` (`brew install fd sd`)
fd -H --exclude .git | xargs sd 'DOLMA' 'OLMo'
fd -H --exclude .git | xargs sd 'Dolma' 'Olmo'
fd -H --exclude .git | xargs sd 'dolma' 'olmo'

Continue running the 7B

One experiment is to just keep running the 7B and see if it recovers from the spikes.

Look at data right where the spike happens

It is suspicious that we had two slightly different models (one with biases, one without), that both spiked at exactly the same moment. This suggests there might be a data issue.
