
ztxz16 / fastllm

3.3K stars · 41 watchers · 332 forks · 23.19 MB

A pure C++, cross-platform LLM acceleration library with Python bindings; chatglm-6B-class models reach 10,000+ tokens/s on a single GPU; supports GLM, LLaMA, and MOSS base models and runs smoothly on mobile devices.

License: Apache License 2.0

CMake 0.42% C++ 73.10% Python 11.20% C 0.54% Cuda 8.71% CSS 0.03% HTML 3.14% JavaScript 1.11% Java 1.68% Dockerfile 0.03% Shell 0.02% QMake 0.01%

fastllm's Introduction

fastllm

English Document

Introduction

fastllm is a high-performance, multi-platform large language model inference library implemented in pure C++ with no third-party dependencies.

Deployment discussion QQ group: 831641348

| Quick Start | Getting Models |

Features

  • 🚀 Pure C++ implementation, easy to port across platforms, can be compiled directly on Android
  • 🚀 Fast on ARM, x86, and NVIDIA platforms alike
  • 🚀 Can read original Hugging Face models and quantize them directly
  • 🚀 Can be deployed as an OpenAI API server
  • 🚀 Supports multi-GPU deployment and mixed GPU + CPU deployment
  • 🚀 Supports dynamic batching and streaming output
  • 🚀 Decoupled front end and back end, making it easy to support new compute devices
  • 🚀 Currently supports ChatGLM-series models, Qwen-series models, various LLaMA models (ALPACA, VICUNA, etc.), BAICHUAN models, MOSS models, MINICPM models, and more
  • 🚀 Supports custom model structures defined in Python

Quick Start

Build

Building with cmake is recommended; install gcc, g++ (9.4 or later recommended), make, and cmake (3.23 or later recommended) beforehand.

GPU builds require a working CUDA toolchain; use the newest CUDA version you can.

Build with the following commands:

bash install.sh -DUSE_CUDA=ON # build the GPU version
# bash install.sh -DUSE_CUDA=ON -DCUDA_ARCH=89 # optionally pin the CUDA architecture, e.g. 89 for a 4090
# bash install.sh # build the CPU-only version

For builds on other platforms, see the docs: TFACC platform

Run the demo programs (Python)

Assume the model lives in the "~/Qwen2-7B-Instruct/" directory.

After building, the following demos are available:

# openai api server
# requires: pip install -r requirements-server.txt
# this starts a server on port 8080 serving a model named "qwen"
python3 -m ftllm.server -t 16 -p ~/Qwen2-7B-Instruct/ --port 8080 --model_name qwen

# chat with the model in float16 precision
python3 -m ftllm.chat -t 16 -p ~/Qwen2-7B-Instruct/ 

# quantize online to int8 and chat
python3 -m ftllm.chat -t 16 -p ~/Qwen2-7B-Instruct/ --dtype int8

# webui
# requires: pip install streamlit-chat
python3 -m ftllm.webui -t 16 -p ~/Qwen2-7B-Instruct/ --port 8080
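Once the server is up, any OpenAI-compatible client can talk to it. Below is a minimal sketch using the requests library; it assumes the server exposes the standard /v1/chat/completions route on the port and model name configured above.

import requests

# Send a chat request to the fastllm OpenAI-compatible server started above.
# Assumptions: the server listens on localhost:8080 and serves the standard
# /v1/chat/completions route; "qwen" matches the --model_name used above.
resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "model": "qwen",
        "messages": [{"role": "user", "content": "你好"}],
    },
    timeout=600,
)
print(resp.json()["choices"][0]["message"]["content"])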

All of the demos above accept --help for detailed options; see Parameter Description for more.

For currently supported models, see: Model List

Some early HuggingFace models cannot be read directly; see Model Conversion for converting them to the fastllm format.
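As a rough sketch of that conversion flow (based on the fastllm_pytools usage shown in the issues further down this page; the model.save call is an assumption about the conversion API, and the checkpoint name is only an example):

from transformers import AutoTokenizer, AutoModel
from fastllm_pytools import llm

# Load the original HuggingFace checkpoint (example model name).
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
hf_model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)

# Convert it in memory to a fastllm model at float16 precision.
model = llm.from_hf(hf_model, tokenizer, dtype="float16")

# Assumption: the converted model can then be written out as a .flm file.
model.save("chatglm2-6b-fp16.flm")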

Custom model structures are supported; see Custom Models

Run the demo programs (C++)

# go into the fastllm/build-fastllm directory

# command-line chat program with typewriter-style streaming output
./main -p ~/Qwen2-7B-Instruct/ 

# simple webui using streaming output + dynamic batching; supports concurrent access
./webui -p ~/Qwen2-7B-Instruct/ --port 1234 

On Windows, building with the CMake GUI + Visual Studio from the graphical interface is recommended.

If you run into build problems, especially on Windows, see the FAQ

Python API

# create the model
from ftllm import llm
model = llm.model("~/Qwen2-7B-Instruct/")

# generate a reply
print(model.response("你好"))

# generate a reply with streaming
for response in model.stream_response("你好"):
    print(response, flush = True, end = "")

You can also set the number of CPU threads and other options; see ftllm for the detailed API.
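For example, a minimal sketch (assuming ftllm keeps the set_cpu_threads helper from the older fastllm_pytools API; it must be called before the model is created):

from ftllm import llm

# Assumption: ftllm exposes set_cpu_threads as fastllm_pytools did;
# configure the thread count before creating the model.
llm.set_cpu_threads(16)
model = llm.model("~/Qwen2-7B-Instruct/")
print(model.response("你好"))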

This package does not include the low-level API; for deeper functionality see the Python binding API

Multi-GPU Deployment

Multi-GPU deployment from the Python command-line tools

# use the --device argument to select devices
#--device cuda:1 # use a single device
#--device "['cuda:0', 'cuda:1']" # split the model evenly across multiple devices
#--device "{'cuda:0': 10, 'cuda:1': 5, 'cpu': 1}" # split the model across devices in the given ratios

Multi-GPU deployment with ftllm

from ftllm import llm
# the following three forms are supported; call before creating the model
llm.set_device_map("cuda:0") # place the model on a single device
llm.set_device_map(["cuda:0", "cuda:1"]) # split the model evenly across multiple devices
llm.set_device_map({"cuda:0" : 10, "cuda:1" : 5, "cpu": 1}) # split the model across devices in the given ratios

Multi-GPU deployment with the Python binding API

import pyfastllm as llm
# the following form is supported; call before creating the model
llm.set_device_map({"cuda:0" : 10, "cuda:1" : 5, "cpu": 1}) # split the model across devices in the given ratios

Multi-GPU deployment in C++

// the following form is supported; call before creating the model
fastllm::SetDeviceMap({{"cuda:0", 10}, {"cuda:1", 5}, {"cpu", 1}}); // split the model across devices in the given ratios

Build and Run with Docker

Running with docker requires the NVIDIA runtime installed locally, with docker's default runtime changed to nvidia.

  1. Install nvidia-container-runtime
sudo apt-get install nvidia-container-runtime
  2. Change docker's default runtime to nvidia

/etc/docker/daemon.json

{
  "registry-mirrors": [
    "https://hub-mirror.c.163.com",
    "https://mirror.baidubce.com"
  ],
  "runtimes": {
      "nvidia": {
          "path": "/usr/bin/nvidia-container-runtime",
          "runtimeArgs": []
      }
   },
   "default-runtime": "nvidia" // 有这一行即可
}

  3. Download pre-converted models into the models directory
models
  chatglm2-6b-fp16.flm
  chatglm2-6b-int8.flm
  4. Build and start the webui
DOCKER_BUILDKIT=0 docker compose up -d --build

Use on Android

Build

# building on a PC requires the NDK toolchain
# you can also try building on the phone itself; in termux, cmake and gcc can be used (no NDK needed)
mkdir build-android
cd build-android
export NDK=<your_ndk_directory>
# if the phone does not support it, drop "-DCMAKE_CXX_FLAGS=-march=armv8.2a+dotprod" (most recent phones do support it)
cmake -DCMAKE_TOOLCHAIN_FILE=$NDK/build/cmake/android.toolchain.cmake -DANDROID_ABI=arm64-v8a -DANDROID_PLATFORM=android-23 -DCMAKE_CXX_FLAGS=-march=armv8.2a+dotprod ..
make -j

Run

  1. Install the termux app on the Android device
  2. Run termux-setup-storage inside termux to get permission to read the phone's files.
  3. Copy the main binary built with the NDK and the model files onto the phone, then into termux's home directory
  4. Make it executable with chmod 777 main
  5. Run the main binary; see ./main --help for the argument format

fastllm's People

Contributors

255-1, aofengdaxia, bjmsong, caseylai, colorfuldick, denghongcai, dongkid, felix-fei-fei, fluxlinkage, helloimcx, hubin858130, jacques-chen, jiewlmrh, kiranosora, leomax-xiong, levinxo, lockmatrix, lxrite, mistsun-chen, purpleroc, siemonchan, tiansztiansz, tylunasli, wangyumu, wangzhaode, wheylop, wildkid1024, xinaiwunai, yuanphoenix, ztxz16


fastllm's Issues

Quantization methods

Are there plans to support more flexible quantization schemes like those in ggml, e.g. Q4_1 or q3_k_m?

Segmentation fault (core dumped)

Steps:
(1) Set up the build environment per readme.md; status: build succeeded (the main and quant binaries exist);
(2) Exported my fine-tuned (p-tuning v2) model as a floating-point model (/root/autodl-tmp/chatglm-6b.bin, 25GB); status: export succeeded;
(3) Ran ./quant -m chatglm -p /root/autodl-tmp/chatglm-6b.bin -o /root/autodl-tmp/chatglm-6b-int8.bin -b 8 to export an int8 model; this step fails.

The error output:
Segmentation fault (core dumped)

Environment:
Server platform: AutoDL
Ubuntu 20.04
CMake 3.16.3
CUDA 11.3
GPU: RTX 2080 Ti (11GB)
RAM: 40GB

make build error

System: Ubuntu 18.04 x86_64
GPU: P100
Build failure from: make -j4
[ 10%] Building CXX object CMakeFiles/fastllm.dir/src/fastllm.cpp.o
/data/code/fastllm/src/fastllm.cpp: In function ‘int fastllm::DotU4U8(uint8_t*, uint8_t*, int)’:
/data/code/fastllm/src/fastllm.cpp:192:20: error: ‘_mm256_set_m128i’ was not declared in this scope
__m256i bytex = _mm256_set_m128i(_mm_srli_epi16(orix, 4), orix);
^~~~~~~~~~~~~~~~
/data/code/fastllm/src/fastllm.cpp:192:20: note: suggested alternative: ‘_mm256_set_epi8’
__m256i bytex = _mm256_set_m128i(_mm_srli_epi16(orix, 4), orix);
^~~~~~~~~~~~~~~~
_mm256_set_epi8
/data/code/fastllm/src/fastllm.cpp: In member function ‘void fastllm::Data::CalcWeightSum()’:
/data/code/fastllm/src/fastllm.cpp:706:31: error: ‘_mm256_set_m128i’ was not declared in this scope
__m256i bytex = _mm256_set_m128i(_mm_srli_epi16(orix, 4), orix);
^~~~~~~~~~~~~~~~
/data/code/fastllm/src/fastllm.cpp:706:31: note: suggested alternative: ‘_mm256_set_epi8’
__m256i bytex = _mm256_set_m128i(_mm_srli_epi16(orix, 4), orix);
^~~~~~~~~~~~~~~~
_mm256_set_epi8
CMakeFiles/fastllm.dir/build.make:79: recipe for target 'CMakeFiles/fastllm.dir/src/fastllm.cpp.o' failed
make[2]: *** [CMakeFiles/fastllm.dir/src/fastllm.cpp.o] Error 1
CMakeFiles/Makefile2:179: recipe for target 'CMakeFiles/fastllm.dir/all' failed
make[1]: *** [CMakeFiles/fastllm.dir/all] Error 2
Makefile:100: recipe for target 'all' failed
make: *** [all] Error 2

Could you share your build environment, and how to resolve this build error?
Are there plans to support chatglm-16 later? Can the code run on other GPUs (P100, V100, T4, ...)?
Thanks

Could BLOOM also be supported?

Hello, could you consider supporting BLOOM as well? Its architecture should be close to LLaMA, but BLOOM comes in many different sizes, which suits mobile scenarios better and could make this project richer.

Can the Windows platform be supported?

It builds on Linux and runs fast there, but the build fails on Windows.
I would like to run it on Windows; combined with whisper and VITS it could become a real-time conversational AI!

CUDA runtime error

cublasSgemmStridedBatched, called from FastllmCudaBatchMatMul, returns status code 15, so inference cannot proceed. Does the CUDA build run correctly on your side?

Running on Android: file transfer problem

The README says to push main and the model into the termux directory.
The current directory is /data/data/com.termux/files/home/
but adb push reports Permission denied.

Is rooting the phone required, or how else can the files be transferred into the termux directory?
Thanks!

Bug when loading the model

FastLLM Error: FileBuffer.ReadInt error.

terminate called after throwing an instance of 'std::__cxx11::basic_string<char, std::char_traits, std::allocator >'
Aborted

Error when converting an HF model to an flm model

Installed the fastllm_pytools package with the following commands

cd fastllm
mkdir build
cd build
cmake .. -DUSE_CUDA=ON
make -j
cd tools && python setup.py install

from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code = True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code = True)
from fastllm_pytools import llm
model = llm.from_hf(model, tokenizer, dtype = "float16") 

root@7c296f76e678:/home/user/code/build/tools# python3
Python 3.10.6 (main, May 16 2023, 09:56:28) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoTokenizer, AutoModel
>>> model='/home/user/code/chatglm2-6b'
>>> tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code = True)
>>> model = AutoModel.from_pretrained(model, trust_remote_code = True)
Loading checkpoint shards:  71%|████████████████████████████████████████               | 4/7 [00:07<00:05,  1.77s/it 
Loading checkpoint shards:  86%|████████████████████████████████████████                                                                                     
Loading checkpoint shards: 100%|████████████████████████████████████████                                                                                     
Loading checkpoint shards: 100%|████████████████████████████████████████              | 7/7 [00:11<00:00,  1.68s/it]
>>> from fastllm_pytools import llm
>>> model = llm.from_hf(model, tokenizer, dtype = "float16")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/user/code/build/tools/fastllm_pytools/llm.py", line 35, in from_hf
    return hf_model.create(model, tokenizer, dtype = dtype);
  File "/home/user/code/build/tools/fastllm_pytools/hf_model.py", line 49, in create
    model_type = model.config.__dict__["model_type"];
KeyError: 'model_type'

Using the ChatGLM2 model, the model = llm.from_hf(model, tokenizer, dtype = "float16") step throws the error above

Error when running the cmake -j step on my machine

  1. Building inside a Docker container on Ubuntu 20.04
  2. Tried both GCC 9 and GCC 11; same error
  3. The machine is an old Dell R720, so the CPU may be fairly old

Could you please take a look

In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:47,
                 from /root/app/fastllm/include/utils/utils.h:21,
                 from /root/app/fastllm/src/fastllm.cpp:5:
/usr/lib/gcc/x86_64-linux-gnu/11/include/avx2intrin.h: In member function 'void fastllm::Data::CalcWeightSum()':
/usr/lib/gcc/x86_64-linux-gnu/11/include/avx2intrin.h:119:1: error: inlining failed in call to 'always_inline' '__m256i _mm256_add_epi32(__m256i, __m256i)': target specific option mismatch
  119 | _mm256_add_epi32 (__m256i __A, __m256i __B)
      | ^~~~~~~~~~~~~~~~
/root/app/fastllm/src/fastllm.cpp:464:43: note: called from here
  464 |                     acc = _mm256_add_epi32(acc, _mm256_madd_epi16(mx1, ones));
      |                           ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:47,
                 from /root/app/fastllm/include/utils/utils.h:21,
                 from /root/app/fastllm/src/fastllm.cpp:5:
/usr/lib/gcc/x86_64-linux-gnu/11/include/avx2intrin.h:341:1: error: inlining failed in call to 'always_inline' '__m256i _mm256_madd_epi16(__m256i, __m256i)': target specific option mismatch
  341 | _mm256_madd_epi16 (__m256i __A, __m256i __B)
      | ^~~~~~~~~~~~~~~~~
/root/app/fastllm/src/fastllm.cpp:464:43: note: called from here
  464 |                     acc = _mm256_add_epi32(acc, _mm256_madd_epi16(mx1, ones));
      |                           ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:47,
                 from /root/app/fastllm/include/utils/utils.h:21,
                 from /root/app/fastllm/src/fastllm.cpp:5:
/usr/lib/gcc/x86_64-linux-gnu/11/include/avx2intrin.h:119:1: error: inlining failed in call to 'always_inline' '__m256i _mm256_add_epi32(__m256i, __m256i)': target specific option mismatch
  119 | _mm256_add_epi32 (__m256i __A, __m256i __B)
      | ^~~~~~~~~~~~~~~~
/root/app/fastllm/src/fastllm.cpp:463:43: note: called from here
  463 |                     acc = _mm256_add_epi32(acc, _mm256_madd_epi16(mx0, ones));
      |                           ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:47,
                 from /root/app/fastllm/include/utils/utils.h:21,
                 from /root/app/fastllm/src/fastllm.cpp:5:
/usr/lib/gcc/x86_64-linux-gnu/11/include/avx2intrin.h:341:1: error: inlining failed in call to 'always_inline' '__m256i _mm256_madd_epi16(__m256i, __m256i)': target specific option mismatch
  341 | _mm256_madd_epi16 (__m256i __A, __m256i __B)
      | ^~~~~~~~~~~~~~~~~
/root/app/fastllm/src/fastllm.cpp:463:43: note: called from here
  463 |                     acc = _mm256_add_epi32(acc, _mm256_madd_epi16(mx0, ones));
      |                           ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:47,
                 from /root/app/fastllm/include/utils/utils.h:21,
                 from /root/app/fastllm/src/fastllm.cpp:5:
/usr/lib/gcc/x86_64-linux-gnu/11/include/avx2intrin.h:482:1: error: inlining failed in call to 'always_inline' '__m256i _mm256_cvtepu8_epi16(__m128i)': target specific option mismatch
  482 | _mm256_cvtepu8_epi16 (__m128i __X)
      | ^~~~~~~~~~~~~~~~~~~~
/root/app/fastllm/src/fastllm.cpp:462:55: note: called from here
  462 |                     __m256i mx1 = _mm256_cvtepu8_epi16(_mm256_extractf128_si256(ax, 1));
      |                                   ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
....<omitted>.....
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:47,
                 from /root/app/fastllm/include/utils/utils.h:21,
                 from /root/app/fastllm/src/fastllm.cpp:5:
/usr/lib/gcc/x86_64-linux-gnu/11/include/avx2intrin.h:341:1: error: inlining failed in call to 'always_inline' '__m256i _mm256_madd_epi16(__m256i, __m256i)': target specific option mismatch
  341 | _mm256_madd_epi16 (__m256i __A, __m256i __B)
      | ^~~~~~~~~~~~~~~~~
/root/app/fastllm/src/fastllm.cpp:514:51: note: called from here
  514 |                             acc = _mm256_add_epi32(acc, _mm256_madd_epi16(mx0, ones));
      |                                   ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:47,
                 from /root/app/fastllm/include/utils/utils.h:21,
                 from /root/app/fastllm/src/fastllm.cpp:5:
/usr/lib/gcc/x86_64-linux-gnu/11/include/avx2intrin.h:482:1: error: inlining failed in call to 'always_inline' '__m256i _mm256_cvtepu8_epi16(__m128i)': target specific option mismatch
  482 | _mm256_cvtepu8_epi16 (__m128i __X)
      | ^~~~~~~~~~~~~~~~~~~~
/root/app/fastllm/src/fastllm.cpp:512:63: note: called from here
  512 |                             __m256i mx1 = _mm256_cvtepu8_epi16(_mm256_extractf128_si256(bx, 1));
      |                                           ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
make[2]: *** [CMakeFiles/fastllm.dir/build.make:76: CMakeFiles/fastllm.dir/src/fastllm.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:223: CMakeFiles/fastllm_tools.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/Makefile2:93: CMakeFiles/fastllm.dir/all] Error 2
make: *** [Makefile:91: all] Error 2

Inference output: ChatGLM:<eop><eop><eop><eop><eop>

After converting the model, the inference output is nothing but <eop>; what could cause this?

  1. tools/chatglm_export.py ../chatglm-6b.bin
  2. (CPU) build/quant -m chatglm -p chatglm-6b.bin -o chatglm-6b-int8.bin -b 8
  3. Inference (CPU/GPU): build/main -m chatglm -p chatglm-6b-int8.bin
    Result

Load (368 / 368)
Warmup...
finish.
User: 1
ChatGLM: '<eop...'

What could be the cause?

CentOS install error

$ cd /opt/jtmodel/chatgpt-mi10/fastllm/build
$ cmake ..

Error:
-- USE_CUDA: OFF
-- CMAKE_CXX_FLAGS -pthread --std=c++17 -O2 -march=native
CMake Error at CMakeLists.txt:35 (target_link_libraries):
Object library target "fastllm" may not link to anything.

-- Configuring incomplete, errors occurred!
See also "/opt/jtmodel/chatgpt-mi10/fastllm/build/CMakeFiles/CMakeOutput.log".

pyfastllm build issue: pulling the pybind11 submodule

Building pyfastllm requires the pybind11 module; initialize and update the submodules before building.
cd build-py
git submodule init && git submodule update
cmake .. -DUSE_CUDA=ON -DPY_API=ON
make -j4
...

Reporting a bug about the Tokenizer

In the official ChatGLM implementation, tokenization uses the sentencepiece library (import sentencepiece as spm). In self.sp.EncodeAsPieces(text), an English word such as "hello" becomes "▁hello"; note that the leading mark is not an underscore. That should be the canonical behaviour, but this project does not appear to do the same.
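For illustration, the behaviour described can be reproduced directly with sentencepiece; the tokenizer.model path below is a placeholder for whatever sentencepiece model file the checkpoint ships with:

import sentencepiece as spm

# Load the checkpoint's sentencepiece model (path is a placeholder).
sp = spm.SentencePieceProcessor()
sp.Load("tokenizer.model")

# "hello" comes out as "▁hello": the leading "▁" (U+2581) marks a word
# boundary and is not an ordinary underscore.
print(sp.EncodeAsPieces("hello world"))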

First-token latency issue

Tested generation speed on V100, CUDA 11.8, gcc 11.3, with bs=1 and input queries (prompt+query) of 900+ tokens; the first token has a long latency:

  1. fastllm:
    • (1) input length 1200, first token 3.32s, subsequent tokens 20ms
    • (2) input length 1200, first token 3.32s, subsequent tokens 20ms
    • (3) ...
  2. torch
    • (1) input length 1200, first token 1.56s, subsequent tokens 51ms
    • (2) input length 1200, first token 0.12s, subsequent tokens 51ms
    • (3) input length 1200, first token 0.11s, subsequent tokens 51ms
    • (4) ...

The upshot is that fastllm's first-token latency varies a lot with query length and is the same on every repeated query, whereas torch's drops on repeated queries. For short outputs (<50 tokens) there is no advantage over torch.
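For reference, this kind of measurement can be reproduced against the streaming API from the README above; a minimal sketch (model path and prompt are placeholders, not the reporter's setup):

import time
from ftllm import llm

# Placeholders: substitute the actual model and a ~1200-token prompt to reproduce.
model = llm.model("~/Qwen2-7B-Instruct/")
prompt = "请概括以下内容:" + "测试文本。" * 300

start = time.time()
first = None
count = 0
for piece in model.stream_response(prompt):
    if first is None:
        first = time.time() - start  # time to first streamed chunk
    count += 1
total = time.time() - start
print(f"first token: {first:.2f}s, "
      f"later chunks: {(total - first) / max(count - 1, 1) * 1000:.1f} ms each")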

Build warning

-- The CXX compiler identification is GNU 7.3.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- USE_CUDA: ON
-- PYTHON_API: OFF
-- CMAKE_CXX_FLAGS -pthread --std=c++17 -O2 -march=native
-- The CUDA compiler identification is NVIDIA 11.8.89
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Configuring done (1.8s)
CMake Warning (dev) in CMakeLists.txt:
  Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
  empty CUDA_ARCHITECTURES not allowed.  Run "cmake --help-policy CMP0104"
  for policy details.  Use the cmake_policy command to set the policy and
  suppress this warning.

  CUDA_ARCHITECTURES is empty for target "fastllm".
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) in CMakeLists.txt:
  Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
  empty CUDA_ARCHITECTURES not allowed.  Run "cmake --help-policy CMP0104"
  for policy details.  Use the cmake_policy command to set the policy and
  suppress this warning.

  CUDA_ARCHITECTURES is empty for target "main".
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) in CMakeLists.txt:
  Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
  empty CUDA_ARCHITECTURES not allowed.  Run "cmake --help-policy CMP0104"
  for policy details.  Use the cmake_policy command to set the policy and
  suppress this warning.

  CUDA_ARCHITECTURES is empty for target "quant".
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) in CMakeLists.txt:
  Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
  empty CUDA_ARCHITECTURES not allowed.  Run "cmake --help-policy CMP0104"
  for policy details.  Use the cmake_policy command to set the policy and
  suppress this warning.

  CUDA_ARCHITECTURES is empty for target "webui".
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) in CMakeLists.txt:
  Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
  empty CUDA_ARCHITECTURES not allowed.  Run "cmake --help-policy CMP0104"
  for policy details.  Use the cmake_policy command to set the policy and
  suppress this warning.

  CUDA_ARCHITECTURES is empty for target "benchmark".
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) in CMakeLists.txt:
  Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
  empty CUDA_ARCHITECTURES not allowed.  Run "cmake --help-policy CMP0104"
  for policy details.  Use the cmake_policy command to set the policy and
  suppress this warning.

  CUDA_ARCHITECTURES is empty for target "fastllm_tools".
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Generating done (0.0s)
-- Build files have been written to: /users_3/fastllm/build

Is this normal?

Streaming output

Great work!
After quantization, does the model support streaming output?

Details of the quantization

What quantization strategy is used for int8 and int4 in the linear layers? And at inference time, are the weights dequantized back to fp16 before computing?

TP multi-GPU deployment

Are there plans to support tensor-parallel (TP) multi-GPU deployment? FasterTransformer's TP sharding for Bloom-7b shows a clear speed improvement.

Quantization question

Is only the weight quantized at the moment? Is activation quantization being considered?

Problem when converting a model with quant

Running ./quant -p chatglm-6b-fp32.flm -o chatglm-6b-fp16.flm -b 16 gives the following error
FastLLM Error: Unkown model type: unknown
terminate called after throwing an instance of 'std::string'
Aborted (core dumped)
