
LLM-Basics

Just documenting what I learn :)

Pre-training phase - Decoder from scratch (Andrej Karpathy)

  1. Implemented video code (Andrej Karpathy)

    Video link: https://www.youtube.com/watch?v=kCc8FmEb1nY

  2. Tried with the Harry Potter dataset (1 book)

  3. Tried with all 7 books; decent results

  4. Increased the context length and used frequency-based character encoding; better loss (1.8)

  5. Increased the number of epochs; better results (final loss: 1.4)

  6. Tried BERT-based encoding and decoding (see the tokenizer sketch below)

    After 1 epoch loss = 10.47

    After 100 epochs loss = 6.31

    After 300 epochs loss = 5.59

Colab link: https://colab.research.google.com/drive/1w2xrCzgQ7PejGULuiaOY_mQfZKdtH8UW
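
For reference, a minimal sketch of the character-level encoding used in steps 2-5, following the video's approach (the filename is a placeholder), plus the BERT tokenizer swap from step 6:

```python
# Character-level tokenizer: the vocabulary is the set of unique
# characters in the corpus, each mapped to an integer id.
with open("corpus.txt", "r", encoding="utf-8") as f:  # placeholder path
    text = f.read()

chars = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(chars)}
itos = {i: ch for ch, i in stoi.items()}

encode = lambda s: [stoi[c] for c in s]             # string -> ids
decode = lambda ids: "".join(itos[i] for i in ids)  # ids -> string

sample = text[:20]
assert decode(encode(sample)) == sample

# Step 6 swaps this for a BERT tokenizer (assumption: the Hugging Face
# "bert-base-uncased" checkpoint). Its vocabulary is far larger than a
# character set, which is consistent with the higher initial loss above.
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
ids = tok.encode(sample, add_special_tokens=False)
print(tok.decode(ids))
```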

Important techniques

  • Torch LR finder: the learning rate is increased after each processed batch and the corresponding loss is logged. The result is an LR vs. loss plot that serves as guidance for choosing an optimal initial learning rate.
  • One cycle policy: start with a lower learning rate, gradually increase it to a maximum value (the peak of the cycle), and then gradually decrease it again. During the cycle, the momentum is typically cycled inversely: decreased while the learning rate rises, then increased again as it falls. The intuition is that starting with a lower learning rate helps the model converge, while a higher learning rate in the middle of training lets the model escape sharp minima and explore the loss landscape more efficiently. Towards the end of training, reducing the learning rate helps the model converge to a more refined solution. Both techniques are sketched below.
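
A minimal sketch of both techniques, assuming the third-party torch-lr-finder package and PyTorch's built-in OneCycleLR scheduler (the tiny model and data are stand-ins so the snippet runs end to end):

```python
import torch
import torch.nn as nn
from torch.optim import AdamW
from torch.optim.lr_scheduler import OneCycleLR
from torch.utils.data import DataLoader, TensorDataset
from torch_lr_finder import LRFinder  # pip install torch-lr-finder

# Stand-in model and data (placeholders for the decoder and text batches).
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
data = TensorDataset(torch.randn(512, 32), torch.randint(0, 10, (512,)))
train_loader = DataLoader(data, batch_size=64, shuffle=True)
criterion = nn.CrossEntropyLoss()

# 1) LR range test: the LR is increased after each batch, the loss logged.
optimizer = AdamW(model.parameters(), lr=1e-7)
lr_finder = LRFinder(model, optimizer, criterion, device="cpu")
lr_finder.range_test(train_loader, end_lr=1, num_iter=100)
lr_finder.plot()   # pick an LR just before the loss starts to diverge
lr_finder.reset()  # restore model and optimizer state

# 2) One-cycle policy: LR ramps up to max_lr, then anneals back down;
#    momentum (betas for Adam) is cycled inversely by default.
epochs = 3
scheduler = OneCycleLR(optimizer, max_lr=3e-4,
                       steps_per_epoch=len(train_loader), epochs=epochs)
for _ in range(epochs):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()  # OneCycleLR steps once per batch, not per epoch
```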

Supervised Fine-tuning phase - Phi-2 with QLoRA

  • LoRA (Low-Rank Adaptation of Large Language Models) is a popular, lightweight training technique that significantly reduces the number of trainable parameters. It works by inserting a small number of new low-rank weight matrices into the model, and only these are trained.
  • QLoRA quantizes the full pretrained language model to 4 bits, compressing the weights and reducing memory requirements via the NormalFloat (NF4) data type, which is optimized for the distribution of neural network weights; LoRA adapters are then trained on top of the frozen quantized model. (A configuration sketch follows this list.)
  • Dataset: openassistant-guanaco
  • Colab Link: https://drive.google.com/file/d/1rOf5DO0L4WYm1VNzp04gB_0MjOF0pLoG/view?usp=sharing
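
A minimal sketch of this QLoRA setup, assuming the Hugging Face transformers, peft, and bitsandbytes libraries (the target modules and LoRA hyperparameters below are illustrative, not the exact values used in the notebook):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NormalFloat4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA: small trainable low-rank adapters on top of the frozen weights.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, bias="none",
    target_modules=["q_proj", "k_proj", "v_proj"],  # illustrative; model-dependent
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```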

Yet to fix a few things. Will update soon :)

Falcon with QLoRA

Fine-tuned Falcon-7B with QLoRA using trl's SFTTrainer. Ran out of disk space before it could complete one epoch; the loss curve is attached below, and a sketch of the trainer setup follows it.

[loss curve image]
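
A minimal sketch of the SFTTrainer setup, assuming trl's older 0.x API (its keyword arguments have shifted across versions) and the same openassistant-guanaco dataset as the Phi-2 run:

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Reuses the 4-bit quantized model and LoRA config from the QLoRA sketch
# above, but loaded from "tiiuae/falcon-7b" instead of Phi-2.
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

training_args = TrainingArguments(
    output_dir="falcon-7b-qlora",   # checkpoints land here; watch disk space
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,                 # 4-bit Falcon-7B base model
    peft_config=lora_config,     # LoRA adapter config
    train_dataset=dataset,
    dataset_text_field="text",   # column holding the raw conversation text
    max_seq_length=512,
    args=training_args,
)
trainer.train()
```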
