Comments (3)
It doesn't work well.
from baichuan2.
root@58c8455c9d58:/home/model_hub# CUDA_VISIBLE_DEVICES=1,2 python3.9 -m fastchat.serve.cli --model-path Baichuan2-13B-Chat-V1 --num-gpus 2
Xformers is not installed correctly. If you want to use memory_efficient_attention to accelerate training use the following command to install Xformers
pip install xformers.
You are using an old version of the checkpointing format that is deprecated (We will also silently ignore gradient_checkpointing_kwargs
in case you passed it).Please update to the new format on your modeling file. To use the new format, you need to completely remove the definition of the method _set_gradient_checkpointing
in your model.
Loading checkpoint shards: 0%| | 0/3 [00:00<?, ?it/s]/usr/local/lib/python3.9/dist-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.get(instance, owner)()
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████| 3/3 [00:12<00:00, 4.14s/it]
<reserved_106>: hallo
<reserved_107>: Traceback (most recent call last):
File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.9/dist-packages/fastchat/serve/cli.py", line 304, in
main(args)
File "/usr/local/lib/python3.9/dist-packages/fastchat/serve/cli.py", line 227, in main
chat_loop(
File "/usr/local/lib/python3.9/dist-packages/fastchat/serve/inference.py", line 532, in chat_loop
outputs = chatio.stream_output(output_stream)
File "/usr/local/lib/python3.9/dist-packages/fastchat/serve/cli.py", line 63, in stream_output
for outputs in output_stream:
File "/usr/local/lib/python3.9/dist-packages/torch/utils/_contextlib.py", line 56, in generator_context
response = gen.send(request)
File "/usr/local/lib/python3.9/dist-packages/fastchat/serve/inference.py", line 190, in generate_stream
indices = torch.multinomial(probs, num_samples=2)
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
Why does this error occur?
from baichuan2.
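For context on the error above: `torch.multinomial` requires every entry of the probability tensor to be finite and non-negative, so the crash means the model's output distribution already contained `inf` or `nan` before sampling. A common culprit (an assumption here, not confirmed by this thread) is numerical overflow in half precision, where a single overflowed logit poisons the entire softmax. The toy snippet below is plain Python, not FastChat code, and just illustrates that propagation:

```python
import math

def softmax(xs):
    # Numerically naive softmax with no overflow protection.
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# One logit that overflowed to inf turns the normalizer into inf,
# so its own probability becomes inf/inf = nan.
probs = softmax([1.0, float("inf"), 2.0])
print(probs)  # → [0.0, nan, 0.0]
```

Once any entry is `nan`, `torch.multinomial` raises exactly the RuntimeError shown in the traceback.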
This is the baichuan2 v1.0 version.
from baichuan2.
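One defensive pattern worth considering while debugging (a sketch with hypothetical names, not part of the FastChat or Baichuan2 APIs): validate the distribution before sampling and fall back to greedy decoding instead of letting `torch.multinomial` raise. In plain Python, where `probs` stands in for the tensor's values:

```python
import math
import random

def probs_are_valid(probs):
    # multinomial's precondition: every entry finite and non-negative.
    return all(math.isfinite(p) and p >= 0 for p in probs)

def safe_sample_index(probs):
    # Hypothetical guard: when the distribution is corrupted by
    # inf/nan, fall back to greedy decoding over the finite entries
    # rather than crashing mid-generation.
    if not probs_are_valid(probs):
        finite = [p if math.isfinite(p) else -math.inf for p in probs]
        return max(range(len(finite)), key=finite.__getitem__)
    return random.choices(range(len(probs)), weights=probs, k=1)[0]
```

This only masks the symptom, though; the underlying fix is usually to load the model in a dtype it was trained for (e.g. bfloat16 or float32 rather than float16) so the logits never overflow in the first place.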
Related Issues (20)
- CPU at 100% when calling the API HOT 1
- Are there plans to open-source a ~1B-parameter model?
- Performance drop when deployed with TGI
- Is there a way to extend the input window to 8k?
- Baichuan2 Chat Template HOT 6
- Do Baichuan2 7B and 13B use the same training data and data ordering?
- Base model inference: pred is identical to inputs HOT 2
- Baichuan2-7B-Base fine-tuning error: AttributeError: 'BaichuanConfig' object has no attribute 'z_loss_weight' HOT 1
- Same input to the LLM produces different outputs across runs
- fastgpt needs a streaming API, requesting support HOT 1
- Integrating baichuan2 with the fastgpt framework needs a streaming API, requesting support HOT 1
- Running the 13b OpenAI launch script on a single V100: short texts are fine, but long texts trigger CUDA error: out of memory.
- Baichuan2-13B-Chat-4bits won't run HOT 2
- baichuan2-13B-chat fine-tuning loss stays at 0 HOT 2
- baichuan2-7B-chat fine-tuning with TrainerCallback throws an error
- After fine-tuning baichuan2-13b, vllm output differs from the official web_demo
- Does the Baichuan2-13B-Chat-4bits model support Mac?
- Dataset
- I fine-tuned with LoRA for 4 epochs but the model has not converged; how do I resume training from a saved checkpoint?
- What is the input window size?