Comments (8)
As a workaround, the alternative GPT2Tokenizer supports multithreading: https://huggingface.co/vonjack/Qwen-LLaMAfied-HFTok-7B-Chat/tree/main
from qwen.
This may be an issue with the version of HuggingFace datasets that LLaMA-Efficient-Tuning uses; please try upgrading datasets to the latest version. The MWE below runs fine on a recent datasets release:
from transformers import AutoTokenizer
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)

def process(example):
    ids = tokenizer.encode(example['text'])
    out = {'ids': ids, 'len': len(ids)}
    return out

dataset = load_dataset("stas/openwebtext-10k")  # just an example
tokenized = dataset.map(
    process,
    remove_columns=['text'],
    desc="tokenizing the OWT splits",
    num_proc=3,
)
See also:
datasets commit: huggingface/datasets#5552
datasets issue: huggingface/datasets#5769
LLaMA-Efficient-Tuning issue: hiyouga/LLaMA-Factory#328
from qwen.
We cannot control the multiprocessing logic inside datasets. In general, for multi-process tokenization it is best to initialize the tokenizer inside each worker process; passing a tokenizer object between processes may trigger unexpected problems.
from qwen.
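The per-worker initialization recommended above can be sketched without transformers at all; `ExpensiveTokenizer` below is a hypothetical stand-in for `AutoTokenizer.from_pretrained(...)`, and each `datasets.map` worker process would get its own copy of the module-level cache:

```python
# Minimal sketch of lazy per-process initialization. ExpensiveTokenizer
# is a hypothetical stand-in for a real tokenizer; the point is that it
# is constructed inside the worker on first use, never pickled from the
# parent process.
class ExpensiveTokenizer:
    instances = 0  # counts constructions, to show caching works

    def __init__(self):
        ExpensiveTokenizer.instances += 1

    def encode(self, text):
        # stand-in for tokenizer.encode
        return [ord(c) for c in text]

_tokenizer = None  # each worker process holds its own copy of this global

def process(example):
    global _tokenizer
    if _tokenizer is None:  # first call inside this worker process
        _tokenizer = ExpensiveTokenizer()
    ids = _tokenizer.encode(example["text"])
    return {"ids": ids, "len": len(ids)}
```

Passing `process` to `dataset.map(..., num_proc=N)` would then build exactly one tokenizer per worker, since the `None` check only fires once per process.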
Hi, could you share more detailed code so we can reproduce the issue?
from qwen.
@geekinglcq Thanks for the reply. The training framework is https://github.com/hiyouga/LLaMA-Efficient-Tuning; this error occurs whenever preprocessing_num_workers is set above 1. My script is:
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6 accelerate launch --num_processes=7 src/train_bash.py \
--stage sft \
--deepspeed configs/ds_zero2.json \
--lora_target q_proj,k_proj,v_proj,o_proj,gate_proj,up_proj,down_proj \
--template vicuna \
--model_name_or_path ../Qwen-7B \
--do_train \
--dataset alpaca_gpt4_zh \
--finetuning_type full \
--warmup_ratio 0.03 \
--output_dir outputs/qwen-7b-sft \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 8 \
--preprocessing_num_workers 12 \
--lr_scheduler_type cosine \
--evaluation_strategy steps \
--eval_steps 100 \
--logging_steps 1 \
--save_steps 100 \
--save_total_limit 3 \
--learning_rate 2e-5 \
--dev_ratio 0.001 \
--num_train_epochs 3 \
--resume_lora_training True \
--plot_loss \
--report_to wandb \
--fp16 \
--tf32 True
from qwen.
I hit the same limit when calling the model from text-generation-webui: it only uses a single CPU thread, so inference is painfully slow. I opened an issue about it, but nobody has responded…
from qwen.
@skepsun Did you ever solve this? Aside from dropping to a single worker, is there another fix? The problem persists even after upgrading to the latest datasets version.
from qwen.
Hi, would something like the following work?
import os
import threading

from transformers import AutoTokenizer

tokenizer_dict = {}

def process(example):
    k = str(os.getpid()) + str(threading.get_ident())
    if k not in tokenizer_dict:
        for _ in range(100):  # try multiple times when the network is unreliable
            try:
                tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)
                break
            except Exception:
                pass
        else:
            # all attempts failed; without this, `tokenizer` would be unbound below
            raise RuntimeError("failed to load tokenizer after 100 attempts")
        tokenizer_dict[k] = tokenizer
    else:
        tokenizer = tokenizer_dict[k]
    ids = tokenizer.encode(example["text"])
    out = {"ids": ids, "len": len(ids)}
    return out
from qwen.