
qlora-chinese-example

Fine-tune Chinese large language models with QLoRA.

For a more concise example of fine-tuning baichuan-7b with QLoRA, see: https://github.com/taishan1994/baichuan-Qlora-Tuning

Dependencies

numpy==1.24.2
pandas==2.0.0
nltk==3.7
transformers==4.30.0.dev0
accelerate==0.20.0.dev0
deepspeed==0.9.2
peft==0.4.0.dev0
datasets==2.12.0
evaluate==0.2.2
sentencepiece==0.1.97
scipy==1.10.1
icetk
cpm_kernels
mpi4py==3.1.4
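
The transformers, accelerate, and peft pins are dev versions, so they must be installed from source rather than from PyPI; bitsandbytes is also required for 4-bit training even though it is not pinned above. A rough example (a current source snapshot may differ from the exact dev versions listed):

    pip install git+https://github.com/huggingface/transformers.git
    pip install git+https://github.com/huggingface/peft.git
    pip install git+https://github.com/huggingface/accelerate.git
    pip install bitsandbytes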

Directory structure

--output  # LoRA weights saved during training
----chatglm
----alpaca
----bloom
--data
----msra
------instruct_data
--------train.json  # instruction data
--model_hub
----BELLE-7B-2M  # BLOOM weights
----chatglm-6b  # ChatGLM weights
----7B  # original English LLaMA weights
----7B-hf  # English weights converted to Hugging Face format
----chinese-llama-plus-lora-7b  # LoRA weights for the Chinese llama-7b
----chinese-alpaca-plus-lora-7b  # LoRA weights for the Chinese alpaca-7b
----chinese-alpaca-7b  # final model after merging the LoRA weights
----tokenizer.model  # tokenizer from the original LLaMA 7B release
----convert_llama_weights_to_hf.py  # convert LLaMA to the Hugging Face format
----merge_llama_with_chinese_lora.py  # merge LoRA weights into the pretrained model
--tools
----get_version.py  # print installed Python package versions
----get_used_gpus.py  # print GPU usage in a loop
--chat.py  # interactive chat
--qlora.py  # 4-bit training
--process.py  # test data processing

ChatGLM

ChatGLM-6B download: Tsinghua University Cloud (tsinghua.edu.cn)

python qlora.py --model_name="chatglm" --model_name_or_path="./model_hub/chatglm-6b" --trust_remote_code=True --dataset="msra" --source_max_len=128 --target_max_len=64 --do_train --save_total_limit=1 --padding_side="left" --per_device_train_batch_size=8 --do_eval --bits=4 --save_steps=10 --gradient_accumulation_steps=1 --learning_rate=1e-5 --output_dir="./output/chatglm/" --lora_r=8 --lora_alpha=32
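
Under the hood, --bits=4 loads the base model with a 4-bit NF4 quantization config and attaches LoRA adapters before training. The following is a minimal sketch of that setup, not the repository's exact code; it assumes recent transformers/peft/bitsandbytes APIs and that ChatGLM-6B's attention projection is the fused query_key_value layer:

    import torch
    from transformers import AutoModel, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    # Load ChatGLM-6B quantized to 4-bit NF4 with double quantization (the QLoRA recipe).
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModel.from_pretrained(
        "./model_hub/chatglm-6b",
        quantization_config=bnb_config,
        trust_remote_code=True,
        device_map="auto",
    )
    model = prepare_model_for_kbit_training(model)

    # Attach LoRA adapters with the same r/alpha as the command above.
    lora_config = LoraConfig(
        r=8,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["query_key_value"],  # ChatGLM fuses q/k/v into one layer
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # only the LoRA parameters are trainable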

Alpaca

Facebook's official LLaMA release prohibits commercial use, and the weights were never formally open-sourced (although many third-party download links circulate). To comply with the license, the full model weights cannot be published here; thank you for your understanding (the situation is the same abroad). Please search for a download source yourself.

Conversion steps

  • 1. Download 7B, chinese-llama-plus-lora-7b, and chinese-alpaca-plus-lora-7b into model_hub, then change into the model_hub directory.
  • 2. Convert LLaMA to the Hugging Face format: python convert_llama_weights_to_hf.py --input_dir ./ --model_size 7B --output_dir ./7B-hf. If you hit the error If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0, run pip install --upgrade protobuf==3.20.1, then: python convert_llama_weights_to_hf.py --input_dir ./ --model_size tokenizer_only --output_dir ./7B-hf. This yields 7B-hf.
  • 3. Merge the LoRA weights into LLaMA (see the sketch after this list): python merge_llama_with_chinese_lora.py --base_model "./7B-hf" --lora_model "./chinese-llama-plus-lora-7b,chinese-alpaca-plus-lora-7b" --output_type "huggingface" --output_dir "./chinese-alpaca-7b". This yields chinese-alpaca-7b.
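
Conceptually, the merge script applies each LoRA in order and folds its weights into the base model. A rough sketch with peft (the real script additionally handles the expanded Chinese tokenizer and embedding resize, which this omits):

    import torch
    from transformers import LlamaForCausalLM
    from peft import PeftModel

    base = LlamaForCausalLM.from_pretrained("./7B-hf", torch_dtype=torch.float16)
    for lora_path in ["./chinese-llama-plus-lora-7b", "./chinese-alpaca-plus-lora-7b"]:
        # Wrap the model with the adapter, then fold the LoRA deltas into the weights.
        base = PeftModel.from_pretrained(base, lora_path).merge_and_unload()
    base.save_pretrained("./chinese-alpaca-7b")

After merging, fine-tune the merged model with QLoRA: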
python qlora.py --model_name="chinese_alpaca" --model_name_or_path="./model_hub/chinese-alpaca-7b" --trust_remote_code=False --dataset="msra" --source_max_len=128 --target_max_len=64 --do_train --save_total_limit=1 --padding_side="right" --per_device_train_batch_size=8 --do_eval --bits=4 --save_steps=10 --gradient_accumulation_steps=1 --learning_rate=1e-5 --output_dir="./output/alpaca/" --lora_r=8 --lora_alpha=32

BLOOM

BELLE-7B-2M download: BelleGroup/BELLE-7B-2M at main (huggingface.co)

python qlora.py --model_name="chinese_bloom" --model_name_or_path="./model_hub/BELLE-7B-2M" --trust_remote_code=False --dataset="msra" --source_max_len=128 --target_max_len=64 --do_train --save_total_limit=1 --padding_side="left" --per_device_train_batch_size=8 --do_eval --bits=4 --save_steps=10 --gradient_accumulation_steps=1 --learning_rate=1e-5 --output_dir="./output/bloom/" --lora_r=8 --lora_alpha=32

Chat

python chat.py --model_name "chatglm" --base_model "./model_hub/chatglm-6b" --tokenizer_path "./model_hub/chatglm-6b" --lora_model "./output/chatglm/adapter_model" --with_prompt --interactive
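
Roughly, chat.py loads the base model, stacks the trained LoRA adapter on top, and generates. A minimal sketch for the ChatGLM case, assuming the adapter path used above and ChatGLM's chat helper from its remote code:

    from transformers import AutoModel, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained("./model_hub/chatglm-6b", trust_remote_code=True)
    model = AutoModel.from_pretrained("./model_hub/chatglm-6b", trust_remote_code=True).half().cuda()
    # Attach the fine-tuned LoRA adapter on top of the frozen base weights.
    model = PeftModel.from_pretrained(model, "./output/chatglm/adapter_model")
    model.eval()
    response, history = model.chat(tokenizer, "你好", history=[])
    print(response)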

Notes

  • How do I train on my own data? The data format is:

     {
         "data": [
             {"instruction":"", "input":"", "output":""},
             {"instruction":"", "input":"", "output":""},
             ...
         ]
     }

    Then add your own dataset where the datasets are defined in qlora.py, and set the relevant arguments yourself when running the command; a hypothetical example is shown below.
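
For illustration, a hypothetical script that writes a dataset in this format (the task name my_task and its path are made up; match them to whatever you register in qlora.py):

    import json

    samples = [
        {"instruction": "对下面的文本进行实体识别:", "input": "小明在北京上班。", "output": "人名:小明;地名:北京"},
    ]
    # Mirror the layout of data/msra/instruct_data/train.json.
    with open("./data/my_task/instruct_data/train.json", "w", encoding="utf-8") as f:
        json.dump({"data": samples}, f, ensure_ascii=False, indent=2)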

References

liucongg/ChatGLM-Finetuning: Fine-tuning ChatGLM-6B on downstream tasks with Freeze, LoRA, P-tuning, etc. (github.com)

THUDM/ChatGLM-6B: ChatGLM-6B: An Open Bilingual Dialogue Language Model (github.com)

huggingface/peft: 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. (github.com)

ymcui/Chinese-LLaMA-Alpaca: Chinese LLaMA & Alpaca LLMs with local CPU/GPU training and deployment (github.com)

LianjiaTech/BELLE: BELLE: Be Everyone's Large Language Model Engine (an open-source Chinese dialogue LLM) (github.com)

artidoro/qlora: QLoRA: Efficient Finetuning of Quantized LLMs (github.com)

If any kind soul with spare GPUs could try out the final results, please do; renting on AutoDL is costing me more than I can afford.


Issues

RuntimeError: CUDA error: device-side assert triggered

RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

I ran into this error; how can I fix it?
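
As the message itself suggests, rerunning with kernel launches serialized makes the reported stack trace point at the real call site (a general CUDA debugging step, not a fix; during training, device-side asserts are often caused by out-of-range token ids or labels):

    CUDA_LAUNCH_BLOCKING=1 python qlora.py ...  # same flags as the normal run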

RuntimeError: mat1 and mat2 shapes cannot be multiplied

Trying to train on a 12 GB GPU: python qlora.py --model_name="chinese_alpaca" --model_name_or_path="./model_hub/chinese-alpaca-7b" --trust_remote_code=False --dataset="msra" --source_max_len=128 --target_max_len=64 --do_train --save_total_limit=1 --padding_side="right" --per_device_train_batch_size=8 --do_eval --bits=4 --save_steps=10 --gradient_accumulation_steps=1 --learning_rate=1e-5 --output_dir="./output/alpaca/" --lora_r=8 --lora_alpha=32
Error:
File "/mnt/data1ts/llm/training/qlora-chinese-LLM/qlora.py", line 1012, in <module>
train()
File "/mnt/data1ts/llm/training/qlora-chinese-LLM/qlora.py", line 973, in train
train_result = trainer.train(resume_from_checkpoint=checkpoint_dir)

result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1536x4096 and 1x8388608)
What might be causing this? Thanks.

need to set `load_in_8bit_fp32_cpu_offload=True` and pass a custom `device_map` to `from_pretrained`.

(gh_qlora-chinese-LLM) ub2004@ub2004-B85M-A0:~/llm_dev/qlora-chinese-LLM$ python3 qlora.py --model_name="chatglm" --model_name_or_path="/data-ssd-1t/hf_model/chatglm-6b" --trust_remote_code=True --dataset="msra" --source_max_len=128 --target_max_len=64 --do_train --save_total_limit=1 --padding_side="left" --per_device_train_batch_size=8 --do_eval --bits=4 --save_steps=10 --gradient_accumulation_steps=1 --learning_rate=1e-5 --output_dir="./output/chatglm-6b/" --lora_r=8 --lora_alpha=32

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

bin /home/ub2004/anaconda3/envs/gh_qlora-chinese-LLM/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/home/ub2004/anaconda3/envs/gh_qlora-chinese-LLM/lib/python3.10/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
/home/ub2004/anaconda3/envs/gh_qlora-chinese-LLM/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32
CUDA SETUP: Loading binary /home/ub2004/anaconda3/envs/gh_qlora-chinese-LLM/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
loading base model /data-ssd-1t/hf_model/chatglm-6b...
Traceback (most recent call last):
File "/home/ub2004/llm_dev/qlora-chinese-LLM/qlora.py", line 1011, in
train()
File "/home/ub2004/llm_dev/qlora-chinese-LLM/qlora.py", line 836, in train
model = get_accelerate_model(args, checkpoint_dir)
File "/home/ub2004/llm_dev/qlora-chinese-LLM/qlora.py", line 375, in get_accelerate_model
model = model_class[args.model_name].from_pretrained(
File "/home/ub2004/llm_dev/qlora-chinese-LLM/transformers/src/transformers/models/auto/auto_factory.py", line 479, in from_pretrained
return model_class.from_pretrained(
File "/home/ub2004/llm_dev/qlora-chinese-LLM/transformers/src/transformers/modeling_utils.py", line 2819, in from_pretrained
raise ValueError(
ValueError:
Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
the quantized model. If you want to dispatch the model on the CPU or the disk while keeping
these modules in 32-bit, you need to set load_in_8bit_fp32_cpu_offload=True and pass a custom
device_map to from_pretrained. Check
https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu
for more details.

(gh_qlora-chinese-LLM) ub2004@ub2004-B85M-A0:~/llm_dev/qlora-chinese-LLM$
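
The docs linked in the traceback suggest enabling fp32 CPU offload in the quantization config when the model does not fit in GPU memory. A rough sketch, assuming current transformers APIs; note that the warning earlier in this log also shows a bitsandbytes build compiled without GPU support, which must be fixed before any quantized training can run:

    from transformers import AutoModel, BitsAndBytesConfig

    # Keep modules that don't fit on the GPU in fp32 on the CPU.
    bnb_config = BitsAndBytesConfig(
        load_in_8bit=True,
        llm_int8_enable_fp32_cpu_offload=True,
    )
    model = AutoModel.from_pretrained(
        "/data-ssd-1t/hf_model/chatglm-6b",
        quantization_config=bnb_config,
        trust_remote_code=True,
        device_map="auto",
    )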

Whether it's bloom or llama, it errors out

ValueError: You can't train a model that has been loaded in 8-bit precision on a different device than the one you're training on.

Or the error: ValueError: You can't train a model that has been loaded in 8-bit precision on multiple devices.

Question

Has anyone encountered this error: ValueError: paged_adamw_32bit is not a valid OptimizerNames?
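
paged_adamw_32bit was added to transformers around the QLoRA release, so this error usually means the installed transformers predates it; installing the dev version pinned in the dependency list should resolve it:

    pip install git+https://github.com/huggingface/transformers.git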
