
llm-finetuning-toolkit's People

Contributors

akashsaravanan-georgian, balmasi, benjaminye, georgianpoole, mariia-georgian, rohitsaha, sinclairhudson, truskovskiyk, viveksingh-ctrl, viveksinghds


llm-finetuning-toolkit's Issues

[RichUI] Better Dataset Generation Display

Is your feature request related to a problem? Please describe.

  • The dataset creation table always displays every column of the dataset, instead of only the ones needed by prompt and prompt_stub
  • Highlighting in the dataset creation table uses string matching, which produces odd output when matches overlap

Describe the solution you'd like

  • Fix these issues!

[LoRA] Use Validation Set

If I have:

  • test_split: 0.1
  • train_split: 0.8

Maybe we can use calc_val_split = 1 - test_split - train_split = 0.1 as the validation split. We could also apply something like max(calc_val_split, 0.05) to keep the validation split from getting too small.
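The arithmetic above can be sketched as follows (a minimal sketch; the function name and the 0.05 floor are just the suggestion from this issue, not existing toolkit code):

```python
def calc_val_split(train_split: float, test_split: float, floor: float = 0.05) -> float:
    """Derive the validation fraction from whatever data the train/test
    splits leave over, clamped so it never drops below `floor`."""
    remainder = 1.0 - train_split - test_split
    return max(remainder, floor)

# e.g. train_split=0.8 and test_split=0.1 leave ~0.1 for validation;
# train_split=0.9 and test_split=0.09 would leave 0.01, which the floor bumps to 0.05.
```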

quickstart basic - missing qa/llm_tests:?

Ran:
llmtune generate config
llmtune run ./config.yml

Things worked well (once I fixed my mistake with Mistral/huggingface repo permissions). The job ran very fast and put results into the "experiment" directory. But the experiment/XXX/results/ directory only has a "results.csv" file in it. I expected there to be results from the qa/llm_tests section in the config.yml file, which looks like this:
qa:
  llm_tests:
    - jaccard_similarity
    - dot_product
    - rouge_score
    - word_overlap
    - verb_percent
    - adjective_percent
    - noun_percent
    - summary_length

Do I have to do something extra to get the qa to run?

[Workflow] Automatically run `black`, `flake8`, `isort` via Github Action

Is your feature request related to a problem? Please describe.

  • Automatic formatting and linting to improve code consistency

Describe the solution you'd like

  • Have pre-commit hooks that run before commits
  • Example

Describe alternatives you've considered

  • I've been manually running black on the whole repo from time to time, but that's not a great solution for collaboration
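A pre-commit setup along these lines could cover all three tools (a sketch; the version pins are illustrative and would need to match whatever versions the project standardizes on):

```yaml
# .pre-commit-config.yaml (illustrative version pins)
repos:
  - repo: https://github.com/psf/black
    rev: 24.3.0
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/isort
    rev: 5.13.2
    hooks:
      - id: isort
  - repo: https://github.com/PyCQA/flake8
    rev: 7.0.0
    hooks:
      - id: flake8
```

Running the same hooks in CI (e.g. via the pre-commit GitHub Action) would keep the workflow check in sync with what runs locally before each commit.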

example config file to run inference only on fine-tuned model

Is it possible to provide a config file that shows how to run inference on an already fine-tuned model?

I have run the starter config, and it looks like the final PEFT model weights are in experiment/XXX/weights/.

So how do I re-run inference only (and possibly qa checks) on that model?

Quickstart Basic uses a very large model and is slow.

The basic "quickstart" example downloads Mistral-7B-Instruct-v0.2, which is ~15 GB and took me over 20 minutes to download. A smaller model should be used for the quickstart example.

To Reproduce
Steps to reproduce the behavior:

  1. Follow the "basic" level of quickstart

The basic version of the quickstart should be, in my opinion, a 10 minute (max) process and not require so much disk space.

Environment:

  • OS: Ubuntu 22.04
  • Packages Installed

Allow users to set verbosity of outputs

  • Right now debug outputs and warnings are suppressed in favor of a cleaner UI
  • We should let users opt into more verbose output by running something like
llmtune run --verbose
llmtune run -v

`pipx` installation doesn't work

Describe the bug
Having trouble installing with pipx

To Reproduce
Steps to reproduce the behavior:

  1. brew install pipx
  2. pipx install llm-toolkit

Expected behavior
Installs fine

Screenshots
Screenshot 2024-04-08 at 11:35:41 AM

Environment:

  • OS: MacOS

Training of FlanT5 for summarization

I tried following the same framework used for training the other LLMs (Falcon, Mistral, etc.) with SFTTrainer to train the FlanT5 model as well.
But the results are bad, as if the model doesn't learn anything.
Training it with the Seq2Seq method works. Why did you use this method for FlanT5 and SFTTrainer for all the other LLMs?

Trying to access gated repo error, Quickstart Basic

After installation, run:
llmtune generate config
==> works fine
llmtune run ./config.yml
==> get this error

OSError: You are trying to access a gated repo.
Make sure to request access at https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2 and pass a token having permission to this repo either by logging in with huggingface-cli login or by passing token=<your_token>.

So then I do:
huggingface-cli login
==> login succeeds
llmtune run ./config.yml
==> get the same error

Any ideas?

Allow custom train/test datasets

Is your feature request related to a problem? Please describe.
I'm working on a problem that requires me to split my data in a specific way (based on dates). Right now the config only allows a single dataset to be provided, and it internally does a train-test split based on the values of the test_size and train_size parameters.

Describe the solution you'd like
Ideally, an option to specify paths to both train and test data.
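For illustration, the config could express this with something like the following (purely hypothetical keys — train_path and test_path are not existing options, just a sketch of the request):

```yaml
data:
  # hypothetical: explicit per-split files instead of test_size/train_size
  train_path: ./data/train.csv
  test_path: ./data/test.csv
```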

Describe alternatives you've considered
The alternative would be to add in support for other types of data splitting which I don't think makes sense for this repo to include.

Additional context
None

Inferencing script not executable due to package dependency errors

I tried to go through the README file as described, and once I execute llama2_baseline_inference.py I get the error

ImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://test.pypi.org/simple/ bitsandbytes` or `pip install bitsandbytes`

even though the packages are installed in my environment. I was able to circumvent this problem by upgrading my datasets library with pip install -U datasets, after which I received another error, given in this link

To avoid that issue, I downgraded my transformers library to 4.3, and currently I am unable to download some of the checkpoints. I think the pinned package versions need to be updated to recent releases.

Add comment to indicate tf32 won't be available for older GPUs

Describe the bug
I'm trying to run this toolkit in a Colab notebook with a T4 GPU and ran into errors. To get it working, I needed to set bf16 and tf32 to false, and fp16 to true. There's already a note for bf16 and fp16; maybe we can add a note for tf32 as well.
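For context, TF32 requires an Ampere-class GPU (compute capability 8.0 or newer), which is why a T4 (capability 7.5) needs fp16 instead. A comment in the config could point at a check along these lines (the helper name is mine; on a live machine the capability tuple comes from torch.cuda.get_device_capability()):

```python
def supports_tf32(compute_capability: tuple) -> bool:
    """TF32 tensor cores were introduced with Ampere, i.e. compute capability 8.0+."""
    return compute_capability >= (8, 0)

# A T4 reports (7, 5), so tf32 (and bf16) must be disabled and fp16 used instead.
```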

[CLI] Add `llmtune inference [experiment_dir]`

The command llmtune inference [experiment_dir] aims to provide a versatile interface for running inference on pre-trained language models, allowing users to:

  1. Load and run inference on a dataset; or
  2. Provide arbitrary text inputs for inference for spot checks; or
  3. Specify particular column values to be injected into the prompt template for inference

Proposed CLI

llmtune inference [experiment_dir] [options]

Arguments

experiment_dir: The experiment directory from finetuning experiments

Options

--dataset [dataset_path]: Path to a dataset (e.g., CSV, JSON, or Huggingface)
--text-input [text]: An arbitrary text input to run inference on. This option can be used for a single text input or for quick manual inference.
--column [name=value]: Allows specification of a column name and value for custom inputs. This option can be used multiple times to specify different column values.

Examples

Inference on a dataset:

llmtune inference ./my_experiment --dataset ./data/my_dataset.csv

Inference on arbitrary text:

llmtune inference ./my_experiment --text-input "This is an example text input for inference."

Inference with specific input values:

llmtune inference ./my_experiment --column column_1="foo" --column column_2="bar"
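A rough sketch of the proposed option parsing (using argparse purely for illustration; the toolkit may use a different CLI framework, and the parser shape is my assumption):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="llmtune inference")
    parser.add_argument("experiment_dir", help="Experiment directory from finetuning")
    # --dataset and --text-input select different inference modes
    mode = parser.add_mutually_exclusive_group()
    mode.add_argument("--dataset", help="Path to a dataset (CSV, JSON, or Huggingface)")
    mode.add_argument("--text-input", help="Arbitrary text input for quick manual inference")
    # --column may be repeated to set several template columns
    parser.add_argument("--column", action="append", default=[], metavar="name=value",
                        help="Column name=value for custom inputs; repeatable")
    return parser

args = build_parser().parse_args(
    ["./my_experiment", "--column", "column_1=foo", "--column", "column_2=bar"]
)
columns = dict(c.split("=", 1) for c in args.column)
```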

Related to: #160

CLI to Generate Example `config.yml`

  • For better usability, instead of requiring users to copy config.yml out of the source repo, we can write a simple script that downloads the file and writes it to the user's current working directory
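The script could be as small as the following (a sketch; the source URL would point at wherever the example config lives in the repo, and the function name is mine):

```python
import urllib.request
from pathlib import Path

def fetch_example_config(url: str, dest: str = "config.yml") -> Path:
    """Download the example config and write it into the current working directory."""
    out = Path.cwd() / dest
    with urllib.request.urlopen(url) as resp:
        out.write_bytes(resp.read())
    return out
```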

question about fine tuning falcon

Hello
I ran the falcon classification task using the following command:
!python falcon_classification.py --lora_r 64 --epochs 1 --dropout 0.1 # finetune Falcon-7B on newsgroup classification dataset
Upon inspecting the model, I find that many of the layers are full rank rather than the expected low rank.

from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("experiments/classification-sampleFraction-0.99_epochs-1_rank-64_dropout-0.1/assets")
 

Here is a screenshot showing this

Screenshot 2023-10-24 at 8 41 42 PM

Is this expected behavior ?

[Dataset] Dataset Generation Always Returns Cached Version

Describe the bug
At dataset creation time, the cached version of the dataset is always returned, even when the source file has changed.

To Reproduce

  1. Run toolkit.py
  2. Ctrl-C
  3. Add a line in the dataset
  4. toolkit.py will not create a new dataset with desired changes

Expected behavior

  1. Dataset to be generated with new data

Environment:

  • OS: Ubuntu
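One possible direction for a fix (a sketch of the idea only, not the toolkit's actual caching code): key the cache on a hash of the source file's contents, so any edit invalidates it.

```python
import hashlib
from pathlib import Path

def dataset_cache_key(path: str) -> str:
    """Fingerprint the raw dataset file; any change to its contents changes the key."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return f"{Path(path).name}-{digest[:12]}"
```

If the toolkit loads data through Hugging Face `datasets`, passing `download_mode="force_redownload"` to `load_dataset` could also serve as a stopgap while a proper invalidation is worked out.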
